Improving the Decision-Making Qualities of Gaming Simulations
Bill Roungas (a), Femke Bekius (b), Alexander Verbraeck (a), and Sebastiaan Meijer (c)
(a) Delft University of Technology, The Netherlands
(b) Radboud University Nijmegen, The Netherlands
(c) KTH Royal Institute of Technology, Sweden
Gaming simulations (games) for policy and decision making have been the neglected "sibling" of educational and training games. The latter have seen widespread use by practitioners and researchers, while the former have had limited, yet slowly increasing, adoption by organisations. As a result, various issues in developing and using these games remain unaddressed, including the design of games, their validation, the actual game sessions, and the application of the resulting knowledge in organisations. In this paper, solutions are proposed for issues identified in these four areas of gaming simulations. The solutions range from purely analytical to purely social, stressing the interdisciplinary approach required to tackle the issues. The result consists of several theoretical and practical contributions, as well as philosophical considerations, regarding games for policy and decision making.
Keywords: gaming simulations, decision making, complexity, knowledge management, game design, game theory, validation, debriefing
Gaming simulations, hereinafter referred to as games, have been closely related to systems, and particularly to systems characterised as complex (Wardaszko, 2018). Over the past decades we have witnessed an evolution of systems theory from a focus on purely technical or purely social systems (Kornhauser, 1934), to socio-technical systems (STS) (Trist and Bamforth, 1951), and all the way to complex (adaptive) systems (CAS) (Holland, 1992). Given the increasing complexity of systems within our society, simulations and games can play an important role, as they offer insight into the behaviour of a system and into the relation between interventions and performance. An interesting question to answer for complex systems is: what do games have to offer, that other tools do not, that can help us understand systems?
Pure simulations, hereinafter referred to as simulations, have perhaps been the most popular tool for analysing systems. But as systems become more complex, and especially when that complexity is attributed to the bounded rationality of humans (Simon, 1957), it becomes cumbersome for simulations to capture the sometimes unpredictable behaviour of the increasing number of (human or organisational) actors within systems. Of course, the field of simulation is not static, and major advances have taken place in areas like agent-based modelling (ABM) (Macal and North, 2010; Secchi, 2015, 2017) and System Dynamics (SD) (Akkermans and Van Oorschot, 2005; Li et al., 2018; Morgan et al., 2017), modelling aspects of human or organisational activity that were previously not considered feasible. Nevertheless, these fields, and particularly ABM, are still evolving and have a long way to go until they can fully capture the richness of human activity (Bazghandi, 2012). This is perhaps the most critical limitation of analytical science, i.e. a set of quantitative data-intensive methods, with regard to analysing human-enabled systems, and a gap that games can fill, and on many occasions have filled. Certainly, modelling methods like ABM and SD are also heavily used within games, not just simulations, and it would be shortsighted to separate them from games. Still, games, on top of the simulation model and regardless of the method used to model the system under study, have the additional
characteristic of including humans, which gives them the unique advantage of not just
relying on stochasticity but also incorporating the actual human actors, who are part
of the real system. Therefore, the answer to the question posed in the previous para-
graph is that games can help us grasp, both on an individual and on a collective level,
the richness and the subsequent complexity of systems through coupling the rigour
of analytical science with the social problem-solving nature of design science (Klab-
bers, 2018; Raghothama and Meijer, 2018). This paper acknowledges the importance
of modelling methods but is more concerned with understanding the unknown factors
introduced by humans.
Despite games being a relatively mature field, with more than 50 years of academic and applied history (Duke, 1974; Pruitt and Kimmel, 1977), it still lacks systematic and rigorous methodologies for designing games, validating them, executing game sessions, and managing the knowledge derived from them. While this holds for all types of games, this paper focuses on games for policy and decision making, hereinafter referred to as games for P&DM, and particularly on those applied to engineering systems. On the one hand, engineering
systems are a typical example of CAS, due to their large scale, long lifetime, and close
proximity to social systems, which evoke complex features such as adaptation, self-
organisation, and emergence (Ottino, 2004). On the other hand, there are some distinct
characteristics that diﬀerentiate games for P&DM from games for learning, situation
awareness or hypothesis testing. In order to clarify the diﬀerence between P&DM
games and other types of games, a characterisation of game types is needed. Being
more focused on engineering systems, this paper adopts the characterisation proposed
by Grogan and Meijer (2017) shown in Table 1.
The characterisation is based on two criteria, the type of knowledge generated by the
game and the stakeholders who are the beneﬁciaries of this knowledge. With regards
to the type of knowledge generated, the authors distinguish between two categories: i.
Generalisable, meaning that the knowledge acquired during the game provides broad
insights beyond the scope of a particular game scenario, and ii. Contextual, meaning
that the knowledge acquired during the game provides deep insights closely related to a
particular game scenario. With regards to the beneﬁciary of the generated knowledge,
Grogan and Meijer (2017) again distinguish between two categories: i. Participant,
meaning that the beneﬁciaries of the knowledge acquired during the game are the
persons who play the game, and ii. Principal, meaning that the beneﬁciaries of the
knowledge acquired during the game are stakeholders other than the participants, like
decision makers, researchers etc. Games for P&DM are games that generate contextual
knowledge and the knowledge beneﬁciary is the principal, in which case participants
are usually experts on the role they play in the game.
It should be noted that this categorisation is not absolute, in the sense that the
knowledge type and beneﬁciary are not boolean variables. Instead, the categorisation
should be treated more like a continuum, where some games fall more in the Gener-
alisable/Participant quadrant, hence considered to be more focused on learning, some
other games fall in the Contextual/Principal quadrant, hence considered to be more
focused on design or decision making, and so forth. Moreover, while the term learn-
ing can be interpreted in a broad sense, in this context learning refers to games that
have been explicitly designed to teach a particular subject or curriculum. Hence, even
though learning indeed happens when participants play a game for P&DM, the purpose of such a game is not to teach, and it is therefore distinguished from games for learning.

Table 1. Canonical Applications of Gaming Methods (Grogan and Meijer, 2017).

                   Knowledge Beneficiary
Knowledge Type     Participant                 Principal
Generalisable      Experiential learning       Hypothesis generation and testing
                   Dangerous tasks             Artefact assessment
Contextual         Organisational learning     Interactive visualisation
                   Policy intervention         Collaborative design
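To make the two-criteria characterisation concrete, it can be sketched as a small classification helper. This is an illustrative sketch only: the enum and function names are our own, and the quadrant labels follow our reading of the text and of Table 1.

```python
from enum import Enum

class KnowledgeType(Enum):
    GENERALISABLE = "generalisable"
    CONTEXTUAL = "contextual"

class Beneficiary(Enum):
    PARTICIPANT = "participant"
    PRINCIPAL = "principal"

def game_focus(ktype: KnowledgeType, beneficiary: Beneficiary) -> str:
    """Map a game's position in the 2x2 characterisation to its typical focus."""
    focus = {
        (KnowledgeType.GENERALISABLE, Beneficiary.PARTICIPANT): "learning/training",
        (KnowledgeType.GENERALISABLE, Beneficiary.PRINCIPAL): "hypothesis testing",
        (KnowledgeType.CONTEXTUAL, Beneficiary.PARTICIPANT): "organisational learning",
        (KnowledgeType.CONTEXTUAL, Beneficiary.PRINCIPAL): "policy and decision making",
    }
    return focus[(ktype, beneficiary)]

print(game_focus(KnowledgeType.CONTEXTUAL, Beneficiary.PRINCIPAL))
# → policy and decision making
```

Treating the two criteria as enumerations rather than booleans also leaves room for the continuum interpretation discussed below, e.g. by attaching a weight to each axis.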
In addition to the above characterisation, games for P&DM usually have a limited
number, or even complete lack, of rules. The term rules does not refer to the rules
governing the translation of a system into a game (Klabbers, 2009) but rather the
freedom, or lack thereof, of players to explore diﬀerent alternatives within the game
environment. These games are also called open games (Klabbers, 2009). Open games have the unique advantage of offering a platform for exploration, where new and innovative ideas can be pursued in a safe environment. Nevertheless, open games also come with their own unique challenges, one of which is the difficulty of facilitating a game whose outcome is not merely unknown but does not even belong to a known set of possible outcomes.
Unlike games for learning/training, which enjoy a much more widespread adoption, resulting in an extensive body of knowledge in the policy and management domains (e.g., Barnabè (2016); Van der Zee and Slomp (2009)), games for P&DM are not described extensively in the literature, because of their limited adoption by organisations.
Hence, this paper's goals are first to provide a clear understanding of the complexities of games for P&DM and then, based on research gaps identified by Roungas et al. (2018f), to describe solutions for four aspects of the game lifecycle: design, validation, game sessions, and knowledge management. These four areas of inquiry were chosen because they cover the whole lifecycle of games (Roungas, 2019): a game starts with its design, part of which are the requirements; then the game should be validated, in order for its results to be credible; a game session is the core of a game's lifecycle; and finally, knowledge management is where the results of a game are put into action. Each of these areas can be found separately in the literature, but to the best of our knowledge, this research describes their joint analysis for the first time. The four areas do not necessarily follow a linear, waterfall-like timeline; they regularly intertwine and provide feedback to each other, as is also discussed later in this paper.
The aim of the paper is twofold: i. to propose solutions to the identified gaps, extending the currently rather limited literature, and ii. to bridge the gap between the design and analytical communities by acknowledging their incompatibilities and providing fertile ground for discussion and future research. The second aim stems from the increased complexity of modern systems, explained in detail in Section 2, which calls for new interdisciplinary methods of analysis. Hence, the second goal is connected with the first, in the sense that the proposed methods are interdisciplinary and aim not only at providing solutions to the identified gaps but also at raising awareness of the level of complexity involved and, as a result, of the need for, and potential benefits of, cooperation between the design and analytical communities.
In Section 2, the particularities of games for P&DM are described, while the com-
plexity characterising them is explored in more detail. Section 3 to Section 6 propose
solutions regarding the identiﬁed gaps, in the areas of design, validation, game sessions,
and knowledge management, respectively. In Section 7, ﬁnal remarks are made.
2. Complexity of Games for Policy and Decision Making
As mentioned in Section 1, games for P&DM:
(1) Accommodate contextual knowledge, in the sense that insights from a speciﬁc
scenario cannot (usually) be generalised,
(2) Beneﬁt the principal and not the participants, in the sense that the principal
has a question that needs to be answered, and
(3) Usually are open, in terms of the freedom participants have to explore different alternatives within the game environment.
These three characteristics influence the following four phases in game development and usage:
•Design is inﬂuenced by all three characteristics. The particular scenarios, the
background of the participants and of any other involved stakeholder, and the
freedom participants have within the game, dictate speciﬁc design choices. More
details on these relationships can be found in Section 3.
•Validation is primarily inﬂuenced by the degree of freedom participants have.
More freedom, or a more open game, usually translates to a less known set of
outcomes, which in turn means it is more challenging to formally validate the
game. More details on these relationships can be found in Section 4.
•Game sessions are inﬂuenced by all three characteristics. In Section 5, results
from two rounds of interviews with game stakeholders, game designers, and fa-
cilitation experts are presented that show how game sessions in general and
debrieﬁng in particular are inﬂuenced by these three characteristics, but also
how people within the community have opposing opinions.
•Knowledge Management is influenced by the quality and type of knowledge as well as by the knowledge beneficiary, who is the potential user of a knowledge management system. More details on these relationships can be found in Section 6.
In addition to the inﬂuence of the three characteristics, these four phases also in-
ﬂuence each other. Game design choices can positively aﬀect or inhibit validation
depending on how the real system is translated into a game. Validation increases a
game’s credibility, which in turn enables a game session to be more eﬀective; and vice
versa, a game session that is considered "successful" increases the game's credibility
and applicability, thus making it more valid. Finally, knowledge management is the
“umbrella” that covers all aspects of games. As such, the methodologies associated
with it should be tailored according to the design, validation, and game sessions, and
should also take into account the complexity of games in general and of these phases in particular.
Several types and/or levels of complexity have been proposed, aiming at capturing, studying, and addressing the complexity pertaining to decision-making processes. On the one hand, there is the so-called "objective" complexity, also known as system complexity (Özgün and Barlas, 2015), rooted in the engineering sciences (Hughes, 1986). On the other hand, there is the "subjective" complexity (Özgün and Barlas, 2015), derived from the social sciences (Thissen and Walker, 2013), and observed in decisions (Van Bueren et al., 2003), actors (de Bruijn and Herder, 2009), and institutions (Van Bueren et al., 2003). While distinguishing between "objective" and "subjective" complexity gives insight into the different elements of a process, it does not resolve the actual complexity of the process. The real complexity is represented by the interdependencies within and between the complexity levels (de Bruijn and Herder, 2009; Liu and Li, 2012). In this paper, we distinguish three types of complexity: technical, actor, and context complexity.
The system under study can be viewed along three lines:
•Functionally organised in aspect systems (Veeke et al., 2008), each of which
deﬁnes the main responsibilities for the actors related to that aspect.
•Geographically organised, in which case the system is divided into subsystems,
each corresponding to a region.
•Hierarchically organised, by distinguishing between, e.g., the operational and the strategic level.
The technical complexity of the system depends on the number of technical changes or uncertainties, but also on how the system is viewed. A change in one aspect of the system usually requires alignment with one or more other aspects, and subsequently for many subsystems to adapt, which in turn influences both the operational and the strategic level.
Decision-making processes for complex systems usually involve multiple actors with
diﬀerent perspectives and interests. These actors are not necessarily hierarchically
organised, but often they are mutually dependent. As a result, they form a network of
interdependencies. In such a network, the course of the decision-making depends on
the behaviour of and interactions between these actors (de Bruijn and ten Heuvelhof,
2008). This results in an often messy, spaghetti-like interaction structure. Moreover,
the formal organisational structures are often hierarchical, which might give some
actors a special position, making the decision-making process even more diﬃcult.
During the process of decision-making, both the network of actors involved and the
content of problems and solutions might change over time. This dynamic behaviour is
for a large part the result of many interdependencies (e.g., a change in one regional
subsystem has an effect on the national system, a change in actor A's behaviour might
impact the behaviour of actor B, etc.). Moreover, decision-making processes are al-
ways impacted by unforeseen external developments such as political decisions, media
attention, and technical innovations.
Based on the analysis above, it becomes evident that complexity is not just the result
of the increased size of systems, but is mainly caused by the numerous interdependen-
cies. Even when these interdependencies are abstracted to a certain degree, they still
bear a signiﬁcant amount of complexity, which needs to be translated into game de-
sign choices. The resulting games for P&DM are therefore characterised by numerous
and complex interaction structures for which researchers and practitioners have only
limited knowledge on how to understand and model them. In Table 2, the four phases of game development and usage are shown in relation to the three types of complexity.
Table 2. The different complexities in each game phase.

                       Complexities
Game Phases            Technical                 Actor                         Context
Design                 Analytical science        Behavioural science           Design science
                       ←− Game theory (See Section 3) −→
Validation             Simulation layer          Game layer                    Simulation & Game layer
                       (See Section 4)           (See Section 4)               for the specific context
                                                                               (See Section 4)
Game Sessions          Open & Closed games       Participants & Principals;    Contextual & Generalisable
                                                 different background
                                                 knowledge
Knowledge Management   Explicit knowledge        Tacit knowledge               Context communicated
                       (See Section 6.1)         (See Section 6.1)             through Personalisation
                                                                               (See Section 6.2)
3. Game Design

Since the early days of game usage, there have been attempts to define and formalise game design. The vast majority of research has focused on the educational capabilities
of games, and only a handful of researchers have proposed approaches for formalising
the design of games for P&DM.
Duke (1974) proposed the use of conceptual maps combined with precise documentation of the design process. Such maps can ensure the game's correspondence with reality, ascertain that the appropriate level of abstraction is being adopted, and confirm that the corresponding proposals can be implemented in the game design. Harteveld (2011) discussed balancing reality, meaning, and play in game design. For each of these three pillars, he proposed several ways to implement them successfully within a game. Harteveld (2011) implicitly utilises game theory in several ways, but not fully. The goal of the framework proposed in this section is to build upon his work, and more specifically to further develop the first pillar of his Triadic Game Design approach, i.e. reality.
Only a few, yet promising, attempts have been made towards explicitly utilising game theory in game design. Game theory does come with limitations regarding the modelling of systems' complexity (Klabbers, 2018), the most important of which is the over-simplification of situations (e.g. 2 actors and 2 strategies), which in turn leaves out contextual elements and does not capture the dynamics of the process (Bennett, 1987). However, there have been various attempts to overcome this impediment, one of which is including the beliefs of actors (Bacharach, 1994). The notion of "Game Concepts" includes this broader approach; therefore game theory should not be dismissed altogether, as it can give insights on several aspects of systems, such as actors' interaction and (strategic) behaviour. Indeed, several researchers consider game theory to be a useful tool for game designers (Bolton, 2002; Guadiola and Natkin, 2005; Mader et al., 2012; Ritterfeld et al., 2009; Salen and Zimmerman, 2004; Skardi et al., 2013; Sterman, 1989), due to its ability to structure a real-world process, thus allowing the analysis of different scenarios and providing a perspective on the possible actions. Perhaps the most in-depth approach is that of Salen and Zimmerman (2004), who explicitly use elements from game theory, like utility functions, strategies, and pay-off matrices, in game design.
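As a minimal illustration of these game-theoretic elements, the sketch below enumerates the pure-strategy Nash equilibria of a two-player game given as pay-off matrices, using the Prisoner's Dilemma as input. The function and variable names are our own; this is an illustration of the underlying concepts, not part of the framework proposed below.

```python
from itertools import product

def nash_equilibria(p1, p2, rows, cols):
    """Find all pure-strategy Nash equilibria of a two-player game.

    p1[r][c] / p2[r][c] are the row and column player's pay-offs when the
    row player picks strategy r and the column player picks strategy c.
    A cell is an equilibrium when both players are best-responding.
    """
    eqs = []
    for r, c in product(range(rows), range(cols)):
        best_row = all(p1[r][c] >= p1[r2][c] for r2 in range(rows))
        best_col = all(p2[r][c] >= p2[r][c2] for c2 in range(cols))
        if best_row and best_col:
            eqs.append((r, c))
    return eqs

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
p1 = [[3, 0],
      [5, 1]]  # row player's pay-offs
p2 = [[3, 5],
      [0, 1]]  # column player's pay-offs
print(nash_equilibria(p1, p2, 2, 2))  # → [(1, 1)]
```

For the Prisoner's Dilemma, the only cell in which both players are simultaneously best-responding is mutual defection; this is exactly the kind of worst-case outcome that a designer may want to surface analytically before building a game around a decision situation.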
In this section, a framework for modelling and translating a real system into a game, using game theory, is proposed. The research gap the framework aims to address is the lack of methodologies for identifying the problematic areas of systems that could benefit from games (Duke, 1980). The main assumption on why this is not currently feasible is that game design is more artistic than scientific (Schell).
The advantages of the proposed framework are twofold. On the one hand, the utilisation of game theory addresses problems related to game design, such as time constraints, cost, required experience, and mistakes in modelling. On the other hand, the formalisation of game design facilitates the understanding of the intrinsic complexities of systems and games, as analysed in Section 2, which in turn enables the use of formal methods in validating games.
The proposed framework consists of: i) a methodology for abstracting the Real System and describing it through one or more Game Concepts, derived from game theory; and ii) a list of Game Concept elements and, linked to it, a corresponding list of game design decisions. The links are established through the characteristics of the Game Concepts (actors, strategies, issues, etc.) and the different game design decisions (scenarios, goals, etc.).
The framework is depicted in Figure 1 and contains ﬁve blocks:
•The Real System represents the real-world system that the game aims to imitate.
The Real System contains actors operating in and on the system, as well as dy-
namics created by the interaction between the system and the actors. Depending
on its complexity, the system can be characterised as either a complex adaptive
system or a socio-technical system.
•The Game Concepts contain characteristics from the toolbox called Game The-
ory (Osborne and Rubinstein, 1994) representing the game elements of the Real
System under study. Game Concepts describe the interaction between and be-
haviour of actors who have to make a decision (Bekius et al., 2018). Some
Game Concepts are mathematically defined (e.g. the Prisoner's Dilemma (Rasmusen, 2007)), while others have only been observed empirically (e.g. a Multi-Issue game (de Bruijn and Herder, 2009)).
•The Gaming Simulations represent the game design decisions used in modelling
the Real System, after taking into account the complexity of the system the
game is being designed for.
•The Characterisation of Real System into Game Concepts is the ﬁrst step in the
methodological process. The resulting Game Concepts should enable identiﬁca-
tion of the problematic areas and worst-case scenarios within the system.
•The Links between Game Concepts and games is the second step in the methodological process. This is the part that is more directly connected with Harteveld (2011) and with his Triadic Game Design, since it is the one that eventually leads to game design recommendations.

Figure 1. Framework for characterising the Real System and linking the Game Concepts to game design decisions.
The dashed arrow represents the game design literature to date, thus making the contributions of this framework even more explicit. The direct link from the Real System to the Game Concepts shows that game design is usually based on the experience of game designers and rarely on formal methods.
The methodology consists of two parts, the rectangles in Figure 1: Characterisation and Links. Due to space limitations, each part is described only briefly here; more details can be found in the provided references.
With regards to the Characterisation, the taxonomy of Game Concepts, which is
described in more detail in Bekius and Meijer (2018), originates from both formal game
theory and public administration, where the concept “game” has a richer and more
descriptive deﬁnition. The characteristics of Game Concepts therefore vary between
being empirically substantiated and mathematically proven. The criteria used to de-
sign the taxonomy, which originate from theory on complex real-world decision-making
processes (de Bruijn and ten Heuvelhof, 2008; Koppenjan and Klijn, 2004; Teisman
and Klijn, 2008), are important for selecting the right Game Concepts. Multiple actors are usually involved in these processes, forming a network of interdependencies. Hierarchical relations can exist within those networks, most frequently between two actors (Bekius and Meijer, 2018). With regard to games, the game theory notions help analyse the situation and "predict" worst-case scenarios. Since such scenarios are undesirable, the ability to identify them in advance can be particularly helpful when making game design decisions.
With regards to the Links, which are described in detail in Roungas et al. (2019), a list of 16 Game Concept elements, based on de Bruijn and ten Heuvelhof (2008), Rasmusen (2007), and Osborne and Rubinstein (1994), is used as a starting point for the Game Concepts' characteristics. For the game design decisions, additional literature is used to adapt and enhance the list of game elements for educational games compiled by Roungas and Dalpiaz (2016), so as to fit games for P&DM. In addition to the literature, two games from the Dutch railways were used to validate the corresponding lists (Roungas, 2019).
3.3. Case Studies
The proposed framework was applied to three case studies. Two of them, both finished projects, were from the Dutch railways, while the third, an ongoing project, is from the Swedish healthcare system. The first two case studies were used to further validate the applicability of the framework, whereas the third was used to test it on a future project.
The results from the case studies revealed several areas that could benefit from the application of the framework (Roungas et al., 2019). Specifically, for games on an operational level, the framework indicated that participants should come not only from the operational layer of the organisation but also from management, thus engaging the actual decision makers in the process. The framework was also able to strike a balance between complexity and realism by avoiding over-complex situations, where there are multiple, possibly conflicting, issues per actor, thus creating a realistic game while keeping complexity at a reasonable level. Finally, in the last case, the framework was able to pinpoint the worst-case scenario in a quick and formal way, whereafter a game can be used to further explore and perhaps prevent bad scenarios from happening.
3.4. Final Remarks on the Design Framework
The framework proposed in this section shows promising results with regard to addressing several problems related to modelling games. In addition, a formalised approach to game design, such as the one proposed here, enables the use of formal validation methods. In turn, the application of formal validation methods allows for verifiable scientific results, as opposed to the current empirical and possibly biased assessments of experts.
4. Game Validation

Unlike pure simulations, games have a distinct characteristic, namely human participation; in other words, games have a Game Layer on top of the Simulation Layer.
Game validation, because it involves humans, usually depends more on the subjective opinion of experts (van Lankveld et al., 2017), e.g. using questionnaires, than on formal methods. This limitation is related to the lack of design methods for games, as well as to the usually low number of participants. The former was analysed in Section 3. The latter, i.e. the sample size, plays a significant role in the applicability of game results. A small sample size is easy to obtain, but offers limited possibilities for deriving analytical conclusions, and thus for generalising the observations from the game. A large sample size, while solving the analytical problem and the generalisability of the results, is usually too expensive to obtain and also difficult to coordinate.
Validation of the Simulation Layer has been extensively researched over the last three decades (Balci, 1998, 2004; Sargent, 1996), and numerous formal methods and statistical techniques have been introduced. Moreover, methodologies have been proposed for first verifying that the sample size is indeed too small (Lenth, 2001), then selecting the most appropriate validation methods and statistical techniques among the numerous existing ones (Roungas et al., 2018d), and finally automating validation (Roungas et al., 2018e). Furthermore, for games in which participants do not need to be physically present, technology can be used to reach a greater audience, thus increasing the sample size (Katsaliaki and Mustafee, 2012).
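A quick way to see why sample size matters for the analytical side of validation is a power calculation. The sketch below uses a normal approximation for a two-sample comparison; it is a simplified, illustrative stand-in for the sample-size checks discussed by Lenth (2001), not a method taken from that work.

```python
import math
from statistics import NormalDist

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison (normal approximation):
    power ≈ Phi(d * sqrt(n/2) - z_{1 - alpha/2}), with d the standardised effect."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(effect_size * math.sqrt(n_per_group / 2) - z_crit)

# With 8 participants per condition, even a large effect (d = 0.8)
# is detected with only ~36% probability:
print(round(approx_power(0.8, 8), 2))   # → 0.36
# whereas roughly 26 per group reaches the conventional 80% power:
print(round(approx_power(0.8, 26), 2))  # → 0.82
```

Typical game sessions sit firmly at the low end of this curve, which is one reason why pooling evidence across sessions through knowledge management is so attractive.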
Validation of the Game Layer, however, is usually not so straightforward, due to the uncertainties pertaining to human activity. The formalisation of game design can provide more structure for game validation, as analysed in Section 3. With regard to the sample size, the Game Layer would benefit from knowledge management, analysed in Section 6, in the sense that the more game sessions are conducted, the more evidence of a system's behaviour is discovered, and the cumulative sample size gradually becomes large enough to generalise the outcome of the game.
The aim of this section is not to propose one particular methodology for game validation, but rather to point out that in most cases game validation is not as straightforward as the validation of simulations. While the analysis of game validation so far referred to games in general, the focus of this paper is on P&DM games, which need additional validation compared to games for teaching and training. Games for P&DM, particularly those imitating engineering systems, have a very dominant Simulation Layer. Therefore, these games should, and usually do, rely heavily on analytical methods. Still, since games, in general but also for P&DM, depend significantly on contextual and behavioural factors, as well as on how the actual game is executed, the briefing, game session, and debriefing are of tremendous importance as well. Validation and game sessions have a reciprocal relationship: increased validation is more likely to lead to a fruitful and more successful game session, and a successful game session boosts the game outcome and thus further increases its validity. But then the question arises: how is a successful game session ensured, particularly in games for P&DM?
5. Game Sessions
Game sessions consist of three phases: briefing, gameplay, and debriefing, with the latter considered the most important feature of games (Crookall, 2010). Nevertheless, their almost completely synthetic nature raises the question: are game sessions in general, and debriefing in particular, performed and analysed in a rigorous scientific way? In other words, are they consistently structured, given the different characteristics of games, and is it clear what constitutes a successful game session and debriefing? The answer to all these questions is no (Roungas et al., 2018a). The reason for this negative outcome is that expertise regarding game sessions and debriefing resides almost entirely in the tacit knowledge spectrum. As a result, knowledge and best practices on how to conduct fruitful game sessions and debriefings are either disseminated without understanding the causes for success, or not disseminated at all (Roungas et al., 2018b). Hence, the aim of this section is to make this tacit knowledge of experts more explicit, and to gain understanding of why certain practices are more prone to success than others. To accomplish this goal, two rounds of interviews were conducted.
The first round of interviews was with 19 experts, of whom 7 were game designers, 6 project leaders, 4 game participants, and 2 department managers. The inclusion criterion for the interviewees was that they should have been stakeholders in at least two games within the last 5 years, in order to have a recent and holistic opinion. The
primary tool for analysis was Q-methodology, while at a later stage Principal Component Analysis (Groth et al., 2013) and K-means clustering (Likas et al., 2003) were used to further validate the results. In the Q-methodology, the responses of the first four interviewees were used to build the q-sort statements, which the remaining 15 interviewees then sorted. The results, shown in Table 3, revealed several factors that either boost or inhibit games’ success.
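The clustering step used to cross-check the Q-methodology results can be illustrated with a minimal K-means sketch; the q-sort profiles below are hypothetical stand-ins, not the study’s actual data.

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Minimal K-means: cluster q-sort profiles (lists of numbers)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each profile to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute each centroid as the mean of its cluster (keep old if empty).
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(col) / len(cl) for col in zip(*cl)]
    labels = []
    for p in points:
        d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(d.index(min(d)))
    return labels

# Hypothetical q-sorts: each row is one interviewee's ranking of 5 statements
# on a -2 (strongly disagree) .. +2 (strongly agree) scale.
qsorts = [
    [2, 1, 0, -1, -2], [2, 2, 0, -1, -2], [1, 2, -1, 0, -2],   # one shared view
    [-2, -1, 0, 1, 2], [-2, -2, 0, 1, 2], [-1, -2, 1, 0, 2],   # an opposing view
]
labels = kmeans(qsorts, k=2)
```

If the clustering recovers the same groupings as the Q-methodology factor analysis, as it does for these two clearly opposed viewpoints, that agreement is what lends the findings additional validity.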
Table 3. Results from the first round of interviews using the Q-methodology (Angeletti, 2018).

Factor | Impact | Comments
Presence of a game manager | + | A person who attends all game-related procedures was found to be beneficial. These procedures involve choosing participants, making these participants available on the day of the game, managing missing players, and taking care of the space and the infrastructure for the gaming session, to name a few.
Managerial guidance and involvement | + | The involvement of mid/high-level managers made the participants feel that what they do during the game session matters and is not just a game.
Structured and concrete results | + | While the limitations of the analytical sciences have been pinpointed in this paper, their complete absence is also detrimental. Apart from the lack of robust scientific methods for evaluating certain results, the absence of quantifiable results was found to diminish the credibility of the game itself.
Strict rules | + | Stricter rules were perceived by the interviewees as an assurance of higher validity of results.
High variety of roles involved in game design | + | Involvement of stakeholders, not just during the game but also during the design process, was appreciated by the interviewees, especially by operational personnel.
Simulator validated beforehand | + | Improperly validated software has created frustration among the stakeholders and a negative opinion of the game overall.
Structured debriefing | + | Particularly for games for P&DM, an unstructured open discussion after the game was found to often distract from the goal of the game.
High complexity of the games | − | Due to time and budget restrictions, over-complex games should be avoided, so that results can be obtained in an affordable and timely manner. Moreover, complex environments tend to overwhelm the participants, causing the opposite effect from the desired one.
Unexpressed and/or conflicting stakeholder interests | − | Unexpressed interests and expectations were found to severely increase the risk of unanswered research questions and unclear results.
Time pressure | − | Time pressure was recognised as a factor that forces untested or insufficiently tested simulators to be used in game sessions, which often causes crashes in the software, leading to negative appreciation on behalf of the participants and potentially invalid results.
Pressure from external actors (for obtaining a solution suitable to their interests) | − | Some stakeholders might put pressure on the game designers or facilitators to obtain results that fit their interests and agenda, which in turn can cause conflicts among the stakeholders, and potentially invalid results.
The application of Principal Component Analysis was inconclusive, while the K-means clustering showed results similar to those of the Q-methodology, thus further validating the findings.
The second round of interviews was with 21 game facilitation experts, all of whom were members of ISAGA and had more than 15 years of expertise. This round of interviews was characterised mainly by contradictory answers to almost all questions, which points to a non-unified approach towards games in general and debriefing in particular. The complexity characterising modern systems, as examined in Section 2, immediately excludes purely analytical methods as the absolute and only solution, as the probability of a ludic fallacy (Taleb, 2004) increases significantly. Therefore, these interviews aimed to provide insights into how facilitation experts approach debriefing, and to tap into their tacit knowledge.
The questions these interviews intended to address were:
(1) Given the limitation of analytical methods to provide clear criteria for success
of game sessions, how should success be deﬁned?
(2) What is the level of knowledge of clients regarding their goal using games and
how should they be prepared prior to the game session?
(3) How do facilitators adapt their approach to the game session based on the players?
The first question yielded perhaps the most diverse answers with regard to how experts define success: 21 interviews resulted in more than 10 different answers, confirming the lack of consistency in the field. Nevertheless, three answers were far more common than the others. Freedom and feeling safe to share one’s experience from the game was considered a factor of paramount importance by six experts. The second most frequent criterion for success was the degree to which players would actually implement the lessons learned during the game in their work. Finally, a success factor acknowledged particularly by game designers was the level of involvement of players and their desire to play the game again.
The first part of the second question was initially expected to be answered overwhelmingly positively, but it turned out that clients often want to build a game without knowing their actual goals. For the second part of the question, facilitators should manage the varying levels of awareness of clients, informing them about the possibly unpredictable results of open games, such as games for P&DM.
The third question relates back to theory, where the interchanging roles that facilitators can, and should, take during a game were introduced (Kriz, 2010). The first step for facilitators is to identify any knowledge gap of the players with regard to the game they will participate in. Then, when the participants feel safe enough during the debriefing, the facilitator should capitalise on that by taking the conversation to a deeper level. It should be noted that the interviewees acknowledged the influence of particular debriefing methods, but none stood out as the most effective or preferred.
The two sets of interviews, analysed in this section, provide “inside” information on
best practices when conducting game sessions and subsequently on debrieﬁng. While
analytical methods can provide invaluable insights when quantitative variables are
available, the kind of knowledge provided in this section can only be attained by
interviewing experts and then properly interpreting the results.
6. Knowledge Management
Knowledge management (KM) and reuse of games are not, and should not be, of academic interest only. The effectiveness of a corporation depends heavily on how it manages and reuses knowledge (Markus, 2001), or in layman’s terms, how it obtains and thereafter maintains the so-called “know-how” (Roungas et al., 2018c). As a corporation acquires and builds up knowledge obtained through games, it improves its know-how, and thus sustains or even increases its competitive advantage (Dixon, 2000).
Despite the fact that games have proven to be cost effective on multiple occasions, they still involve a substantial financial cost (Michael and Chen, 2005). Moreover, time
is required to process the game outcomes and come up with the best possible business
decision. This additional time does not only increase the accrued costs but also delays
decisions that are sometimes time-sensitive. All of the above, combined with the lack of a comprehensive methodology for managing and reusing knowledge acquired through games, results in organisations, researchers, and game practitioners “reinventing the wheel” by conducting consecutive and (almost) identical game sessions, accompanied by data analysis. The motivation for this study is therefore triggered by our strong
belief that the capturing, compilation, maintenance, and dissemination of knowledge
requires a methodology that will maximise the game outcomes concurrently with the
minimisation of the associated costs and risks.
While there is a lack of literature in the area of KM of games, existing literature
in the general area of KM creates a pathway towards KM of games. Therefore, based
on this literature, which is illustrated in the forthcoming subsections, a knowledge
management framework (KMF) is proposed. The KMF consists of several building
blocks, each of which refers to a diﬀerent aspect of the knowledge management sys-
tem (KMS) and/or the organisation. These building blocks are the Type of Knowledge
(Section 6.1), the Strategy, Purpose, and Users of KMS (Section 6.2), and the Organ-
isational Culture. An illustration of the KMF is shown in Figure 2.
The latter, i.e. organisational culture, has a reciprocal relationship with KM. On
the one hand, the cultural values within an organisation inﬂuence the way people
experience the KM outcomes and force the underlying KMS to evolve (Alavi et al.,
2005). On the other hand, KM shapes the organisational values (Alavi et al., 2005) and
improves the organisational performance through the development of human capital
(Hsu, 2008). As a result, the culture of an organisation with regards to KM deﬁnes the
potential eﬀectiveness of KM. Nevertheless, due to the scope and the size limitations
of this paper, organisational culture is not further analysed, yet acknowledged as a
crucial element of knowledge management.
6.1. Type of knowledge
Knowledge can be defined in a number of ways. One of the most widely used classifications is the distinction between explicit and implicit, the latter also known as tacit, knowledge (Smith, 2001). According to this classification, explicit knowledge is considered
to be data or information that is communicated in a formal language and/or digitally
or printed information that can be shared, such as manuals. On the other hand, tacit
knowledge focuses on the cognitive features of humans, such as mental models, beliefs,
insights, and perceptions.
6.1.1. Explicit knowledge
Explicit knowledge produced in and from games can be of quantitative or qualitative
nature. There are four phases, i.e., sources of explicit knowledge, in a game’s lifecycle.
Figure 2. An illustration of the knowledge management framework.

The first phase concerns game requirements. Although requirements are usually
considered to be relevant only for the game they are elicited for, according to Zave
(1997), requirements engineering is also concerned with the evolution of the relation-
ships among the functions and the constraints of a system. As such, requirements
immediately become a tool for knowledge reuse, as they provide a common ground for
comparing diﬀerent systems and pointing to similarities between games. These sim-
ilarities can be used either to improve future game development, as domain speciﬁc
knowledge (Callele et al., 2005), or to reuse the outcome of previously created games
to analyse a current issue.
The second phase is the game design. From a KM perspective, game design is con-
cerned with the proper structure and documentation, which, in turn, can determine
whether the new game is actually required or not, and thus whether previously ob-
tained results can be used with minimal resources (Roungas et al., 2018c).
The third phase is validation. From a KM perspective, validation is concerned with
meticulously documenting the validation process and has a twofold beneﬁt for stake-
holders: i) they can ascertain, with rather minimal eﬀort, whether the results of the
game can be used for the intended purpose, and ii) they can, again with much less
eﬀort, perform their own validation study and hence, use the game for slightly or
completely diﬀerent purposes (Roungas et al., 2018c).
The fourth phase is game sessions, which can be seen as a game instantiation. In
object oriented programming terms (Rentsch, 1982), the game can be seen as a class
with the rules and general guidelines of how the game works, whereas the game session
can be seen as an instance of this class. A game is usually designed once (involving several iterations) but can be played multiple times with a similar or a completely different set of participants.
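The class/instance analogy above can be made concrete with a minimal sketch; the names and fields are illustrative assumptions, not part of the proposed framework.

```python
class Game:
    """The game as a 'class': designed once, holding the rules and guidelines."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules                    # fixed at design time

    def play(self, participants):
        """Each play-through creates a session: an 'instance' of the game."""
        return GameSession(self, participants)

class GameSession:
    """One instantiation of a game, with its own participants and outcomes."""
    def __init__(self, game, participants):
        self.game = game                      # the same design for every session
        self.participants = participants      # varies from session to session
        self.log = []                         # session-specific outcomes

# One design, many sessions with different participants:
timetable_game = Game("Timetable rescheduling", rules=["dispatchers act first"])
session_a = timetable_game.play(["dispatcher A", "planner B"])
session_b = timetable_game.play(["dispatcher C", "planner D"])
```

Both sessions share the same rules object, while each carries its own participants and log, mirroring how knowledge from the design phase is reused across game sessions.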
6.1.2. Implicit/Tacit knowledge
Unlike explicit knowledge, tacit knowledge is not so straightforward to capture and
manage. A database and a ﬁlesystem most probably would not be adequate to tackle
the underlying challenges. Therefore, diﬀerent methodologies, which might also result
in diﬀerent approaches with regards to the implementation of the KMS, are needed.
Although literature is not exhaustive on how to capture and manage tacit knowledge,
and how to convert this tacit knowledge into explicit knowledge, several approaches
have been proposed. Some of the most common techniques are:
•Causal Maps: Causal maps are interpretations of individuals’ or groups’ beliefs about causal relationships (Markóczy and Goldberg, 1995). Causal maps have been proven to be an effective tool for the elicitation of tacit knowledge for a variety of reasons, e.g., allowing a focus on action, eliciting context-dependent factors, etc. (Ambrosini and Bowman, 2001).
•Semi-structured interviews: While the purpose and structure of such an inter-
view is predetermined, the essence of the “semi-structure” lies in the fact that
interviewees are encouraged to answer questions by telling stories (Ambrosini
and Bowman, 2001). The story telling nature of these interviews allows people
to manage the collective memory of an organization (Boje, 1991), frame their
experiences (Wilkins and Thompson, 1991), and reﬂect on the complex social
web of an organization (Brown and Duguid, 1991).
•Q-methodology: In a nutshell, in Q-methodology the interviewee sorts a series of items/statements along a continuum (e.g. from strongly disagree to strongly agree) that is approximately normally distributed, in the sense that more of these statements are placed close to the neutral area than at the two edges of the continuum. A brief example of Q-methodology was shown in Section 5; more detail is provided by Angeletti (2018).
•Metaphors: Various scholars argue that the use of metaphors can serve to trans-
mit tacit knowledge (Ambrosini and Bowman, 2001; Martin, 1982) and since
metaphors allow diﬀerent ways of thinking, people may be able to explain com-
plex organizational phenomena (Tsoukas, 1991). The term “metaphor” indicates
the transfer of information from a relatively familiar domain to a relatively un-
known domain (Tsoukas, 1991).
•Social media: The most ancient form of exchanging knowledge in general (Gurteen, 1998), and tacit knowledge in particular, is dialogue. This is perhaps why social media have become prominent in how people interact, not only at a personal but also at a professional level. While research is still relatively scarce in this area, the use of social media indeed sounds promising for tacit knowledge sharing, since it encompasses interactive and collaborative technologies (Panahi et al., 2012).
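The forced, quasi-normal distribution that Q-methodology imposes can be sketched as follows; the 15-statement layout and the column heights are illustrative assumptions, not those used in the study.

```python
from collections import Counter

# Hypothetical forced distribution for a 15-statement q-sort on a
# -3 (strongly disagree) .. +3 (strongly agree) continuum: column heights
# are quasi-normal, so most slots sit near the neutral middle.
COLUMN_HEIGHTS = {-3: 1, -2: 2, -1: 3, 0: 3, 1: 3, 2: 2, 3: 1}

def is_valid_qsort(placements):
    """Check that an interviewee's sort fills the forced distribution exactly.

    `placements` maps statement id -> column position (-3 .. +3).
    """
    return Counter(placements.values()) == Counter(COLUMN_HEIGHTS)

# One interviewee's (made-up) sort of statements s1..s15:
sort = {f"s{i}": col
        for i, col in enumerate([-3, -2, -2, -1, -1, -1, 0, 0, 0,
                                 1, 1, 1, 2, 2, 3], start=1)}
```

The forced shape is what pushes interviewees to commit: only a few statements may occupy the extreme columns, so the extremes carry the most discriminating information.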
6.2. Strategy, Purpose, and Users of KMS
The Strategy, Purpose, and Users of a KMS are crucial building blocks for KM and
deﬁning the structure of a KMF. An introduction to these elements is provided below.
By looking into management consulting ﬁrms, Hansen et al. (1999) distinguished two
KM strategies, which in turn heavily inﬂuence the ﬁnal implementation of the KMS.
These strategies are called Codiﬁcation and Personalisation.
Codification stores any acquired knowledge and makes it available for reuse. The knowledge is thereby isolated from its source, and this strategy should be preferred when people want to learn from past projects and apply this knowledge in the future (secondary knowledge miners) (Markus, 2001).
Personalisation is the exchange of knowledge that has been acquired in the past
through one-to-one conversations and brainstorming sessions; it is a way to promote
discussion and exchange of ideas and knowledge between people in a more personal
manner, and should be preferred when people can beneﬁt from experts’ opinions
(expertise-seeking novices) (Markus, 2001).
There are various reasons for which an organisation would want to build a KMS. The
most common ones are root-cause analysis, own project improvement, cross-project
improvement, and network improvement.
Root-cause analysis is concerned with establishing strict and precise protocols that help with the examination of problems or failures that might occur throughout the lifecycle of a game or due to decisions made based on a game (Latino et al., 2016).
Own project improvement is concerned with the utilisation of the knowledge ac-
quired during the lifecycle of a game to improve the game itself (Cockburn, 2006)
and/or the project for which the game was built (Roungas et al., 2018c).
Cross-project improvement is concerned with the utilisation of the knowledge ac-
quired during the lifecycle of a game to improve other game projects, current or future.
A KMS can inﬂuence a current or future project either explicitly, by directly apply-
ing the acquired knowledge in another project, or implicitly, by creating added value
(Spender, 2008), perhaps even a paradigm shift, within the organisation, which changes
its modus operandi, and consequently inﬂuences any game project thereafter.
Network improvement is concerned with the utilisation of the KMS to strengthen
the relationships of individuals and teams within an organisation, especially in large
organisations, by bringing awareness of the totality of knowledge possessed within.
Regardless of its type and purpose, the primary function of a KMS is to manage and
disseminate knowledge to people, i.e. users. Therefore, users are at the centre of a
KMS and any frameworks aiming at building a KMS should put users ﬁrst. While
there might be several stakeholders in a KMS, there are three main categories of users involved: knowledge producers, knowledge intermediaries, and knowledge consumers.
Knowledge producers are deﬁned as the people that contribute their knowledge
to the KMS. Incentives should be provided, in order for the knowledge producers to
frequently and eﬀectively share their knowledge and expertise. Moreover, knowledge
producers should be experts in their respective ﬁeld, since a person who aims at using
knowledge previously acquired shall be conﬁdent of the expertise of the knowledge
producers, and thus trust their respective ﬁndings (Watson and Hewett, 2006).
Knowledge intermediaries are the people that manage the knowledge, by indexing,
summarising, and objectifying (to the extent that this is possible and appropriate).
Knowledge consumers are the end users of the KMS, thus the ones that beneﬁt
from it. Depending on the type and the purpose of the KMS, knowledge consumers
can be the game designers, project managers, investigators, researchers, and even the
participants of a game.
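The three user categories above can be sketched as a minimal data model; all names here (Role, KnowledgeItem, the example fields) are illustrative assumptions, not part of the proposed framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    PRODUCER = "producer"          # contributes knowledge to the KMS
    INTERMEDIARY = "intermediary"  # indexes, summarises, objectifies
    CONSUMER = "consumer"          # end user who benefits from the KMS

@dataclass
class KnowledgeItem:
    title: str
    produced_by: str               # an expert, so consumers can trust the finding
    summary: str = ""              # filled in later by an intermediary
    tags: list = field(default_factory=list)

# A producer records a debriefing insight; an intermediary summarises and tags it:
item = KnowledgeItem(title="Debriefing insights, session 2018-03",
                     produced_by="senior facilitator")
item.summary = "Open discussion distracted from the game goal."
item.tags.append("debriefing")
```

Keeping the producer on every item reflects the point above: consumers must be able to trust the expertise behind each finding before reusing it.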
6.3. Application of KMF
The KMF has been applied to three case studies in the Railway Sector for further
validation, two in The Netherlands and one on the European level (Authors, 2018).
The analysis of the case studies reveals that the proposed KMF can cover the major-
ity of knowledge generated in and around these games. However, the implementation
of such a KMF into a fully functional KMS seems, and usually is, labour intensive.
Nevertheless, it is evident both from theory and from the case studies examined that
games produce diﬀerent types and quality of knowledge. Particularly, the games that
are part of the three case studies are designed for testing changes in the railway in-
frastructure, resulting in a strong focus on the debrieﬁng after each game session. In
turn, debrieﬁng becomes the primary source of knowledge, especially for tacit knowl-
edge. Hence, capturing all knowledge from games gives new opportunities for validity
assessments at a higher level of detail, which both complements and puts pressure on
the current sense-making approaches (van den Hoogen et al., 2014).
6.4. Final Remarks on the KMF
Knowledge is the prime component of any KMS, hence it deserves the lion’s share of attention when analysing games. Nevertheless, knowledge alone is not enough to build a KMS; the
strategy, purpose, and potential users of the KMS should also be understood and taken
into account. In eﬀect, the purpose and the users of the KMS heavily inﬂuence how
knowledge from games is captured, stored, and disseminated. Moreover, the purpose
of the KMS deﬁnes its potential users and particularly the knowledge consumers.
The proposed framework provides general guidelines on the components to consider
for the development of a KMS. Specific details on how to develop the KMS are dependent on the organisational culture itself and, as mentioned above, the users that
support and use the KMS. Moreover, the purpose for which an organisation builds
a KMS depends heavily on its maturity with regards to knowledge management. In
this context, “mature” means that the organisation has the “know-how” of managing
knowledge, which allows it to follow a top-down approach on the design of the KMS,
thus starting by first defining the purpose and then gathering the required data. In contrast, “immature” means that the organisation follows a bottom-up approach to the design of the KMS, thus starting by first gathering data, and then defining the purpose of the KMS based on the quality of the knowledge produced from the acquired data.

7. Conclusions

This paper started by clarifying the epistemology that governs games for P&DM.
Then, through literature review, gaps in these games were identiﬁed, and solutions
were proposed to address them. Four areas were identiﬁed where gaps exist within
game development and usage: design, validation, game sessions, and knowledge man-
agement. For design, a framework for formalising game design based on game theory
was proposed. Further research could focus on the application, ﬁne-tuning and vali-
dation of the framework in domains other than the railways. For validation, the addi-
tional steps needed for the validation of P&DM games were acknowledged, especially
the validation of the Simulation Layer in relation to the Game Layer, for which em-
pirical and analytical methods should be combined. For the game sessions, interviews
with experts identiﬁed a clear list of “do’s and don’ts” for the success of games for
P&DM. For knowledge management, a framework for the management of knowledge
produced by and in games was proposed. A next step would be the implementation
of the framework into a full scale knowledge management system to ﬁne-tune and
validate the framework as well as demonstrate its operational capabilities. In addition
to identifying the gaps in the four areas within game development and usage, a con-
nection between these areas was established, showing how they are intertwined and
thus aﬀecting one another. Each of these four areas and their interconnections show
the complexity of developing and using games and, as a result, the interdisciplinary
approach that this requires. The identiﬁed problems and subsequently the proposed
solutions vary from purely analytical to purely social, stressing the need for seamless
cooperation between the analytical and design communities.
References

Akkermans, H. A. and Van Oorschot, K. E. (2005). Relevance assumed: A case study of balanced scorecard development using system dynamics. Journal of the Operational Research Society.
Alavi, M., Kayworth, T. R., and Leidner, D. E. (2005). An empirical examination of the inﬂu-
ence of organizational culture on knowledge management practices. Journal of Management
Information Systems, 22(3):191–224.
Ambrosini, V. and Bowman, C. (2001). Tacit knowledge: Some suggestions for operational-
ization. Journal of Management Studies, 38(6):811–829.
Angeletti, R. (2018). Managing knowledge in the era of serious games and simulations: An
exploratory study on the elicitation of serious games’ requirements for the generation and
reuse of knowledge. Master’s thesis, Delft University of Technology.
Authors (2018). Guidelines for the management and dissemination of knowledge from gaming
simulations. Under Review.
Bacharach, M. (1994). The epistemic structure of a theory of a game. Theory and Decision,
Balci, O. (1998). Veriﬁcation, validation, and testing. In Banks, J., editor, Handbook of Sim-
ulation: Principles, Methodology, Advances, Applications, and Practice, chapter 10, pages
335–393. Engineering & Management Press.
Balci, O. (2004). Quality assessment, veriﬁcation, and validation of modeling and simulation
applications. In Ingalls, R. G., Rossetti, M. D., Smith, J. S., and Peters, B. A., editors,
Proceedings - Winter Simulation Conference, volume 1, pages 122–129, Washington, D.C.,
USA. Association for Computing Machinery.
Barnab`e, F. (2016). Policy deployment and learning in complex business domains: The poten-
tials of role playing. International Journal of Business and Management, 11(12):15–29.
Bazghandi, A. (2012). Techniques, advantages and problems of agent based modeling for traﬃc
simulation. International Journal of Computer Science Issues, 9(1):115–119.
Bekius, F. A. and Meijer, S. A. (2018). Selecting the right game concept for social simulation
of real-world systems. In 14th Social Simulation Conference, Stockholm, Sweden.
Bekius, F. A., Meijer, S. A., and de Bruijn, H. (2018). Collaboration patterns in the Dutch
railway sector: Using game concepts to compare diﬀerent outcomes in a unique development
case. Research in Transportation Economics, 69:360–368.
Bennett, P. G. (1987). Beyond game theory - Where? In Bennett, P. G., editor, Analysing Con-
ﬂict and its Resolution: Some Mathematical Contributions, chapter 3, pages 43–70. Claren-
don Press, Oxford.
Boje, D. M. (1991). Consulting and change in the storytelling organisation. Journal of Orga-
nizational Change Management, 4(3):7–17.
Bolton, G. E. (2002). Game theory’s role in role-playing. International Journal of Forecasting,
Brown, J. S. and Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1):40–57.
Callele, D., Neufeld, E., and Schneider, K. (2005). Requirements engineering and the creative
process in the video game industry. In Proceedings of the IEEE International Conference
on Requirements Engineering, pages 240–250. IEEE.
Cockburn, A. (2006). Agile software development: The cooperative game. Addison-Wesley, 2nd edition.
Crookall, D. (2010). Serious games, debrieﬁng, and simulation/gaming as a discipline. Simu-
lation and Gaming, 41(6):898–920.
de Bruijn, H. and Herder, P. M. (2009). System and actor perspectives on sociotechnical sys-
tems. IEEE Transactions on Systems, Man, and Cybernetics Part A:Systems and Humans,
de Bruijn, H. and ten Heuvelhof, E. (2008). Management in networks: On multi-actor decision making. Routledge.
Dixon, N. M. (2000). Common knowledge: How companies thrive by sharing what they know.
Harvard Business School Press, Boston, MA, USA.
Duke, R. D. (1974). Gaming: The future’s language. Sage Publications, New York, USA.
Duke, R. D. (1980). A paradigm for game design. Simulation & Gaming, 11(3):364–377.
Grogan, P. T. and Meijer, S. A. (2017). Gaming methods in engineering systems research.
Systems Engineering, 20(6):542–552.
Groth, D., Hartmann, S., Klie, S., and Selbig, J. (2013). Principal components analysis. In
Methods in Molecular Biology, volume 930, pages 527–547. John Wiley & Sons, Inc.
Guadiola, E. and Natkin, S. (2005). Game theory and video game, a new approach of game
theory to analyze and conceive game systems. In Proceedings of CGAMES 2005 - 7th
International Conference on Computer Games: Artiﬁcial Intelligence, Animation, Mobile,
Educational and Serious Games, pages 166–170, Angoul`eme, France.
Gurteen, D. (1998). Knowledge, creativity and innovation. Journal of Knowledge Management.
Hansen, M. T., Nohria, N., and Tierney, T. J. (1999). What’s your strategy for managing
knowledge. Harvard Business Review, 77(2):106–116.
Harteveld, C. (2011). Triadic game design: Balancing reality, meaning and play. Springer
Science & Business Media.
Holland, J. H. (1992). Complex adaptive systems. Daedalus, 121(1):17–30.
Hsu, I.-C. (2008). Knowledge sharing practices as a facilitating factor for improving organizational performance through human capital: A preliminary test. Expert Systems with Applications.
Hughes, T. P. (1986). The evolution of large technological systems. In Bijker, W. E., Hughes,
T. P., and Pinch, T. J., editors, The social construction of technological systems, pages
51–82. The MIT Press.
Katsaliaki, K. and Mustafee, N. (2012). A survey of serious games on sustainable development.
In Laroque, C., Himmelspach, J., Pasupathy, R., Rose, O., and Uhrmacher, A. M., editors,
Proceedings of the 2012 Winter Simulation Conference, pages 300–312, Berlin, Germany.
Klabbers, J. H. G. (2009). Terminological ambiguity: Game and simulation. Simulation & Gaming.
Klabbers, J. H. G. (2018). On the architecture of game science. Simulation & Gaming.
Koppenjan, J. and Klijn, E.-H. (2004). Managing uncertainties in networks. Taylor & Francis.
Kornhauser, A. W. (1934). The human problems of an industrial civilization. Psychological
Kriz, W. C. (2010). A systemic-constructivist approach to the facilitation and debrieﬁng of
simulations and games. Simulation & Gaming, 41(5):663–680.
Latino, R. J., Latino, K. C., and Latino, M. A. (2016). Root cause analysis: Improving perfor-
mance for bottom-line results. Taylor & Francis Group, fourth edition.
Lenth, R. V. (2001). Some practical guidelines for eﬀective sample size determination. The
American Statistician, 55(3):187–193.
Li, K., Zhang, Y., Guo, J., Ge, X., and Su, Y. (2018). System dynamics model for high-speed
railway operation safety supervision system based on evolutionary game theory. Concur-
rency and Computation: Practice and Experience, 31(e4743):1–10.
Likas, A., Vlassis, N., and Verbeek, J. (2003). The global k-means clustering algorithm. Pattern Recognition.
Liu, P. and Li, Z. (2012). Task complexity: A review and conceptualization framework. Inter-
national Journal of Industrial Ergonomics, 42(6):553–568.
Macal, C. M. and North, M. J. (2010). Tutorial on agent-based modelling and simulation.
Journal of Simulation, 4(3):151–162.
Mader, S., Natkin, S., and Levieux, G. (2012). How to analyse therapeutic games: The
player/game/therapy model. In Herrlich, M., Malaka, R., and Masuch, M., editors, In-
ternational Conference on Entertainment Computing, pages 193–206, Bremen, Germany.
Springer Berlin Heidelberg.
Markóczy, L. and Goldberg, J. (1995). A method for eliciting and comparing causal maps.
Journal of Management, 21(2):305–333.
Markus, M. L. (2001). Toward a theory of knowledge reuse: Types of knowledge reuse situations
and factors in reuse success. Journal of Management Information Systems, 18(1):57–93.
Martin, J. (1982). Stories and scripts in organizational settings. In Hastorf, A. H. and Isen,
A. M., editors, Cognitive Social Psychology, pages 255–305. New York: Elsevier.
Michael, D. R. and Chen, S. L. (2005). Serious games: Games that educate, train, and inform.
Thomson Course Technology PTR.
Morgan, J. S., Howick, S., and Belton, V. (2017). A toolkit of designs for mixing Discrete Event
Simulation and System Dynamics. European Journal of Operational Research, 257(3):907–
Osborne, M. J. and Rubinstein, A. (1994). A Course in game theory. The MIT Press.
Ottino, J. M. (2004). Engineering complex systems. Nature, 427(6973):399.
Özgün, O. and Barlas, Y. (2015). Effects of systemic complexity factors on task difficulty in a stock management game. System Dynamics Review, 31(3):115–146.
Panahi, S., Watson, J., and Partridge, H. (2012). Social media and tacit knowledge sharing:
Developing a conceptual model. In World Academy of Science, Engineering and Technology
(WASET), pages 1095–1102, Paris, France.
Pruitt, D. G. and Kimmel, M. J. (1977). Twenty years of experimental gaming: Critique,
synthesis, and suggestions for the future. Annual Review of Psychology, 28(1):363–392.
Raghothama, J. and Meijer, S. (2018). Rigor in gaming for design: Conditions for transfer
between game and reality. Simulation & Gaming, 49(3):246–262.
Rasmusen, E. (2007). Games and information: An introduction to game theory (4th edition).
MA: Blackwell Publishing.
Rentsch, T. (1982). Object oriented programming. ACM SIGPLAN Notices, 17(9):51–57.
Ritterfeld, U., Cody, M., and Vorderer, P. (2009). Serious games: Mechanisms and effects. Routledge.
Roungas, B. (2019). An inquiry into gaming simulations for decision making. PhD thesis,
Delft University of Technology.
Roungas, B., Bekius, F. A., and Meijer, S. (2019). The game between game theory and gaming
simulations: Design choices. Simulation & Gaming.
Roungas, B. and Dalpiaz, F. (2016). A model-driven framework for educational game design.
In De Gloria, A. and Veltkamp, R., editors, GALA 2015 Revised Selected Papers of the 4th
International Conference on Games and Learning Alliance, volume 9599, pages 1–11, Rome,
Italy. Springer International Publishing.
Roungas, B., De Wijse, M., Meijer, S., and Verbraeck, A. (2018a). Pitfalls for debriefing
games and simulation: Theory and practice. In Naweed, A., Wardaszko, M., Leigh, E., and
Meijer, S., editors, Intersections in Simulation and Gaming, pages 101–115. Springer, Cham.
Roungas, B., Lo, J. C., Angeletti, R., Meijer, S. A., and Verbraeck, A. (2018b). Eliciting
requirements of a knowledge management system for gaming in an organization: The role of
tacit knowledge. In 49th International Conference of International Simulation and Gaming
Association, Bangkok, Thailand.
Roungas, B., Meijer, S., and Verbraeck, A. (2018c). Knowledge management of games for
decision making. In Lukosch, H., Bekebrede, G., and Kortmann, R., editors, Lecture Notes
in Computer Science, volume 10825 LNCS, pages 24–33, Delft, The Netherlands. Springer.
Roungas, B., Meijer, S. A., and Verbraeck, A. (2018d). A framework for optimizing simula-
tion model validation & veriﬁcation. International Journal On Advances in Systems and
Measurements. In Press.
Roungas, B., Meijer, S. A., and Verbraeck, A. (2018e). Harnessing Web 3.0 and R to mitigate
simulation validation restrictions. In International Conference on Simulation and Modeling
Methodologies, Technologies and Applications, Porto, Portugal.
Roungas, B., Meijer, S. A., and Verbraeck, A. (2018f). The future of contextual knowledge in gaming simulations: A research agenda. In Proceedings of the 2018 Winter Simulation Conference.
Salen, K. and Zimmerman, E. (2004). Rules of play: Game design fundamentals. MIT press.
Sargent, R. G. (1996). Verifying and validating simulation models. In Charnes, J. M., Morrice,
D. J., Brunner, D. T., and Swain, J. J., editors, Proceedings of the 1996 Winter Simulation
Conference, pages 55–64, Coronado, California, USA. IEEE Computer Society.
Schell, J. (2014). The art of game design: A book of lenses. New York: A K Peters/CRC Press, 2nd edition.
Secchi, D. (2015). A case for agent-based models in organizational behavior and team research.
Team Performance Management, 21(1-2):37–50.
Secchi, D. (2017). Agent-based models of bounded rationality. Team Performance Management.
Simon, H. A. (1957). Models of man: Social and rational. Wiley, 1st edition.
Skardi, M. J. E., Afshar, A., and Solis, S. S. (2013). Simulation-optimization model for non-
point source pollution management in watersheds: Application of cooperative game theory.
KSCE Journal of Civil Engineering, 17(6):1232–1240.
Smith, E. A. (2001). The role of tacit and explicit knowledge in the workplace. Journal of
Knowledge Management, 5(4):311–321.
Spender, J.-C. (2008). Organizational learning and knowledge management: Whence and
whither? Management Learning, 39(2):159–176.
Sterman, J. D. (1989). Modeling managerial behavior: Misperceptions of feedback in a dynamic
decision making experiment. Management Science, 35(3):321–339.
Taleb, N. N. (2004). No, small probabilities are “not attractive to sell”: A comment. Journal
of Behavioral Finance, 5(1):2–7.
Teisman, G. R. and Klijn, E.-H. (2008). Complexity theory and public management. Public
Management Review, 10(3):287–297.
Thissen, W. A. H. and Walker, W. E. (2013). Public policy analysis: New developments.
Springer US, Boston, MA.
Trist, E. L. and Bamforth, K. W. (1951). Some social and psychological consequences of the
Longwall method of coal-getting. Human Relations, 4(1):3–38.
Tsoukas, H. (1991). The missing link: A transformational view of metaphors in organizational
science. Academy of Management Review, 16(3):566–585.
Van Bueren, E. M., Klijn, E.-H., and Koppenjan, J. F. M. (2003). Dealing with wicked problems in networks: Analyzing an environmental debate from a network perspective. Journal of Public Administration Research and Theory, 13(2):193–212.
van den Hoogen, J., Lo, J. C., and Meijer, S. (2014). Debriefing in gaming simulation for
research: Opening the black box of the non-trivial machine to assess validity and reliability.
In Tolk, A., Diallo, S. Y., Ryzhov, I. O., Yilmaz, L., Buckley, S., and Miller, J. A., edi-
tors, Proceedings of the 2014 Winter Simulation Conference, pages 3505–3516, Savannah,
Georgia, USA. IEEE Press.
Van der Zee, D.-J. and Slomp, J. (2009). Simulation as a tool for gaming and training in
operations management - A case study. Journal of Simulation, 3(1):17–28.
van Lankveld, G., Sehic, E., Lo, J. C., and Meijer, S. A. (2017). Assessing gaming simulation
validity for training traffic controllers. Simulation & Gaming, 48(2):219–235.
Veeke, H. P., Ottjes, J. A., and Lodewijks, G. (2008). The Delft systems approach: Analysis
and design of industrial systems. Springer London.
Wardaszko, M. (2018). Interdisciplinary approach to complexity in simulation game design
and implementation. Simulation & Gaming, 49(3):263–278.
Watson, S. and Hewett, K. (2006). A multi-theoretical model of knowledge transfer in organizations: Determinants of knowledge contribution and knowledge reuse. Journal of Management Studies, 43(2):141–173.
Wilkins, A. L. and Thompson, M. P. (1991). On getting the story crooked (and straight).
Journal of Organizational Change Management, 4(3):18–26.
Zave, P. (1997). Classification of research efforts in requirements engineering. ACM Computing Surveys (CSUR), 29(4):315–321.