Article

Agent-Based Computational Models and Generative Social Science

Wiley
Complexity
Authors: Joshua M. Epstein

Abstract

Agent-based computational modeling is changing the face of social science. In Generative Social Science , Joshua Epstein argues that this powerful, novel technique permits the social sciences to meet a fundamentally new standard of explanation, in which one "grows" the phenomenon of interest in an artificial society of interacting agents: heterogeneous, boundedly rational actors, represented as mathematical or software objects. After elaborating this notion of generative explanation in a pair of overarching foundational chapters, Epstein illustrates it with examples chosen from such far-flung fields as archaeology, civil conflict, the evolution of norms, epidemiology, retirement economics, spatial games, and organizational adaptation. In elegant chapter preludes, he explains how these widely diverse modeling studies support his sweeping case for generative explanation. This book represents a powerful consolidation of Epstein's interdisciplinary research activities in the decade since the publication of his and Robert Axtell's landmark volume, Growing Artificial Societies . Beautifully illustrated, Generative Social Science includes a CD that contains animated movies of core model runs, and programs allowing users to easily change assumptions and explore models, making it an invaluable text for courses in modeling at all levels.


... One option forward is to avoid generalising variables and instead seek lessons on the level of mechanisms (Tilly, 2001;Hedström & Ylikoski, 2010). These mechanisms can concatenate in complex ways to generate quite different, even idiosyncratic systems in time, but also provide portable insights into complex phenomena (Epstein, 1999). I would argue that particularly fruitful "mechanisms" for the sociopolitical world where we are ever concerned with agency in the face of complexity are those that might explain actors' choices (Hollway, 2020; see also Block et al., 2018), for they offer us a way of understanding and acting in a social world. ...
... Prediction, often touted as the gold standard of statistical models, is another aspect dependence touches upon. Epstein (2008) enumerates sixteen reasons other than prediction to build models. These include explanation and theory-testing (quite distinct from prediction), but also the illumination of micro-macro linkages, the suggestion of dynamical analogies, prompting new questions and guiding data collection or case selection, for pedagogical and public education purposes, and to discipline the policy dialogue by identifying risks, efficiencies, and trade-offs. ...
Article
Full-text available
What makes the collections of international institutions or regimes governing various domains—called in the literature regime, institutional, or governance complexes—“complex”? This article examines several conditions for complexity discussed in that literature and finds them necessary but not sufficient. It argues that the sufficient condition is dependence and outlines a framework of increasing levels of synchronic (social/spatial) and diachronic (temporal) dependence. Putting dependence at the centre of discussions on regime complexes has four advantages: (1) it is analytically more precise a condition than proliferation or linkage; (2) it orients us toward questions of degree, ‘how complex’, instead of the binary ‘whether complex’; (3) it informs a range of research design and theoretical choices, especially highlighting extra-dyadic dependencies and an underdeveloped temporal dimension; and (4) it arguably reconciles competing uses of the term “complex” in the literature without conflating it with complexity, structure, or topology.
... What is needed is a methodology that exists in this middle ground between difficult empirical work on the one hand, and often abstract theory on the other. Simulation has long filled this role of bridging theory and empirical work in the social sciences [18,25,55], and more recently has proved useful in the study of algorithmic fairness. For example, [41], and later [17], use simulation to show concretely how the abstract notion of feedback loops in predictive policing can lead to settings where police unfairly target one population over another, despite equal crime rates. ...
... Like Bokányi and Hannák [5], we use an agent-based model here. Agent-based modeling involves the construction of virtual worlds in which artificial agents interact according to a set of pre-specified rules [18]. While behavior is thus predictable at the individual level, ABMs are useful for understanding how these pre-specified microlevel behaviors result in emergent properties, e.g., social biases across social groups [35] or opinion polarization [56]. ...
Preprint
Full-text available
Perhaps the most controversial questions in the study of online platforms today surround the extent to which platforms can intervene to reduce the societal ills perpetrated on them. Up for debate is whether there exist any effective and lasting interventions a platform can adopt to address, e.g., online bullying, or if other, more far-reaching change is necessary to address such problems. Empirical work is critical to addressing such questions. But it is also challenging, because it is time-consuming, expensive, and sometimes limited to the questions companies are willing to ask. To help focus and inform this empirical work, we here propose an agent-based modeling (ABM) approach. As an application, we analyze, on a simulated online dating platform, the impact of a set of interventions on the lack of long-term interracial relationships in an artificial society. In the real world, a lack of interracial relationships is a critical vehicle through which inequality is maintained. Our work shows that many previously hypothesized interventions that online dating platforms could take to increase the number of interracial relationships formed through their websites have limited effects, and that the effectiveness of any intervention is subject to assumptions about sociocultural structure. Further, interventions that are effective in increasing diversity in long-term relationships are at odds with platforms' profit-oriented goals. At a general level, the present work shows the value of using an ABM approach to help understand the potential effects and side effects of different interventions that a platform could take.
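The citing excerpt above characterizes agent-based modeling as the construction of virtual worlds in which agents follow pre-specified rules, with macro-level patterns such as opinion polarization emerging from purely micro-level behavior. A minimal illustration of that idea, not taken from the cited study, is the bounded-confidence opinion model sketched below; the agent count, confidence bound and convergence rate are arbitrary assumptions.

```python
# Hypothetical minimal agent-based model (not the cited authors' code): agents hold
# continuous opinions and interact pairwise under a bounded-confidence rule.
# The macro-level outcome (consensus vs. polarization into clusters) emerges
# from the pre-specified micro-level rule alone.
import random

def run(n_agents=200, epsilon=0.2, mu=0.5, steps=50_000, seed=1):
    random.seed(seed)
    opinions = [random.random() for _ in range(n_agents)]  # initial opinions in [0, 1]
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)            # pick a random pair
        if abs(opinions[i] - opinions[j]) < epsilon:        # interact only if close enough
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
    return opinions

if __name__ == "__main__":
    for eps in (0.1, 0.3):
        ops = run(epsilon=eps)
        clusters = sorted({round(o, 1) for o in ops})
        print(f"epsilon={eps}: opinion clusters near {clusters}")
```

With a narrow confidence bound the population splits into several opinion clusters (polarization), while a wide bound yields consensus, even though the pairwise rule itself never changes.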
... The critical research problem is that of verifying and validating agent-based models (ABM) [9,10] (in short, the V & V problem). Theoretical frameworks have been proposed that attempt to deal with this issue, such as the generative approach to social simulations [11,12], model calibration, docking/alignment [13,14], replication [15], model-to-model analysis [16], etc. But there is no consensus on what verification and validation mean (see also [17,18,19,20]). ...
... - Verifying and validating social models (including simulation models) needs to address issues pertaining to explanation and causality. Statistical testing guidelines pertaining to replication, such as those discussed in [9], or generative explanations such as those proposed in [11,12], are necessary but not sufficient. On the other hand, social mechanisms, being in one acceptation "interpretations in terms of individual behavior of a model that abstractly reproduces the phenomenon that needs explaining" [94], naturally complete and complement these methods (see also [96]). ...
Preprint
We advocate the development of a discipline of interacting with and extracting information from models, both mathematical (e.g. game-theoretic ones) and computational (e.g. agent-based models). We outline some directions for the development of such a discipline:
- the development of logical frameworks for the systematic formal specification of stylized facts and social mechanisms in (mathematical and computational) social science. Such frameworks would bring to attention new issues, such as phase transitions, i.e. dramatic changes in the validity of the stylized facts beyond some critical values in parameter space. We argue that such statements are useful for those logical frameworks describing properties of ABM.
- the adaptation of tools from the theory of reactive systems (such as bisimulation) to obtain practically relevant notions of two systems "having the same behavior".
- the systematic development of an adversarial theory of model perturbations, that investigates the robustness of conclusions derived from models of social behavior to variations in several features of the social dynamics. These may include: activation order, the underlying social network, individual agent behavior.
... The idea of reproducing market-level characteristics from firm-level behavior and interaction necessitates studying evolving industry dynamics and considering market change as a generative process (Chang 2018; Epstein 1999). This shift involves rethinking the ways we have understood and modeled social systems so far ...
... Considering markets as generative processes by means of computational modeling also brings about interpretative and methodological issues. The generative perspective is different from the traditional deductive and inductive approaches (Epstein 1999). Finding a generative explanation through computational modeling basically follows an abductive process (Davis 2018; Miller 2015). ...
Chapter
Social scientists have long studied the evolution of market structures and have tried to explore the internal mechanisms of industrial dynamics. Market evolution has attracted the attention of researchers from fields such as industrial organization, economic sociology, and management. The tradition in industrial organization has focused on empirically understanding the relationship between market structure, firm’s conduct, and performance. Economic sociologists have explored effects on firms’ entry and exit rates and have developed explanatory mechanisms about market formation according to the interplay of such entry and exit rates. Management scholars have used the so-called NK fitness landscape imagery to relate firm-level adaptation features to industry dynamics. Nonetheless, researchers in these distinct fields have come to realize that a better understanding of market evolution would imply considering real (i.e., boundedly rational) firm behavior, out-of-equilibrium processes, and time dynamics. Computational approaches have emerged naturally as a way to explore new theoretical frameworks by simultaneously inspecting the joint effect of adaptive features of individual firms and market selection forces, among others. In this chapter, I review the implications of using agent-based computational modeling to study market structures as emergent properties of the interplay between entry, exit, and adaptation of heterogeneous firms. This emphasizes an abductive approach, in addition to the traditional deductive and inductive ones, to study market processes.
... Axelrod labels this a generative approach (given that it generates the artificial system) and regards it as the "Third way of doing science" (Axelrod 1997). In the same vein, as Epstein says, "If you haven't created it, you haven't explained it" (Epstein 1999). Of course, no practitioner of ABM claims that this generative method is always easy to do, but at least it is another possibility to explore. ...
Chapter
In this chapter, we introduce the general use of agent-based modeling (ABM) in social science studies and in particular in psychological research. Given that ABM is frequently used in many social science disciplines, either as the main research tool or in conjunction with other modeling approaches, its infrequent use in psychology is rather surprising. There are many reasons for this infrequent use of ABM in psychology, some justified, while others stem from not knowing the potential benefits of applying ABM to psychological research. Thus, we begin by giving a brief overview of ABM and the stages one has to go through to develop and analyze such a model. Then, we present and discuss the general drawbacks of ABM and those specific to psychology. Through that discussion, the reader should be able to better assess whether those disadvantages are strong enough to preclude the application of ABM to his/her research. Finally, we conclude by stating the benefits of ABM and examining how those advantages may outweigh the potential drawbacks, thus making ABM a valuable tool to consider in psychological research.
... Since the model investigated here is characterized by a high degree of heterogeneity with respect to the structure and the dynamic interactions between agents and their environment, agent-based simulation seems to be an appropriate research method to deal with the complexity of the research problem. In addition, an agent-based simulation also makes it possible to observe the agents' behavior as well as the system's macro-level behavior over time, which otherwise could not be derived as a "functional relationship" from the individual behaviors of those agents (e.g., Epstein 2006; Wall 2016). ...
Preprint
Full-text available
This paper focuses on specific investments under negotiated transfer pricing. Transfer pricing studies primarily seek conditions that maximize the firm's overall profit, especially in cases of bilateral trading problems with specific investments. However, the transfer pricing problem has mostly been studied in a context where managers are fully rational individual utility maximizers. The underlying assumptions are rather heroic and, in particular regarding how managers process information under uncertainty, do not match human decision-making behavior well. Therefore, this paper relaxes key assumptions and studies whether cognitively bounded agents achieve the same results as fully rational utility maximizers and, in particular, whether the recommendations on managerial-compensation arrangements and bargaining infrastructures designed to maximize headquarters' profit still hold in such a setting. Based on an agent-based simulation with fuzzy Q-learning agents, it is shown that in the case of symmetric marginal cost parameters, myopic fuzzy Q-learning agents invest only as much as in the classic hold-up problem, while non-myopic fuzzy Q-learning agents invest optimally. However, in scenarios with non-symmetric marginal cost parameters, a deviation from the previously recommended surplus sharing rules can lead to higher investment decisions and, thus, to an increase in the firm's overall profit.
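The abstract above relies on fuzzy Q-learning agents. As a purely illustrative sketch of the underlying reinforcement-learning mechanic (tabular rather than fuzzy Q-learning, and with an assumed toy pay-off in place of the negotiated transfer-pricing environment), an agent learning an investment level might look like this:

```python
# Illustrative tabular Q-learning (the cited paper uses fuzzy Q-learning agents in a
# negotiated transfer-pricing setting; the toy pay-off function here is purely assumed).
import random

ACTIONS = [0, 1, 2, 3, 4]          # discrete investment levels (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def payoff(investment):
    # Assumed concave pay-off: investing helps up to a point, then costs dominate.
    return 2.0 * investment - 0.6 * investment ** 2

def train(episodes=5000, seed=0):
    random.seed(seed)
    q = {a: 0.0 for a in ACTIONS}  # single-state problem, so Q is just one value per action
    for _ in range(episodes):
        a = random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)
        r = payoff(a)
        # Q-learning update; with a single state the bootstrap term is max_a' Q(a').
        q[a] += ALPHA * (r + GAMMA * max(q.values()) - q[a])
    return q

if __name__ == "__main__":
    q = train()
    print("learned action values:", {a: round(v, 2) for a, v in q.items()})
    print("greedy investment level:", max(q, key=q.get))
```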
... The word denotes something that unexpectedly appears, something that grows out of something else. Emergence is defined as "(…) macroscopic regularities arising from purely local interactions of the agents" (Epstein, 1999). Emergence is thus the phenomenon that the whole is more than its parts. ...
Article
In social psychological research on groups there is a tendency to focus on the group's influence on the individual, while the interaction between individual and group often fades from view. Groups, however, take many forms, and newer theories question the way social psychology deals with the social and with context. In this article we interpret social identity theory through newer object-oriented theories, which make it possible to understand how the group creates its own object and handles deviance. This approach also makes it possible to understand how the configuration of components in the group comes to influence the transformation of the individual in the group. The configuration of the group appears to be decisive for the limits of deviance in the group and thereby also for the individual's transformation in the group.
... It is important to keep in mind that ABMs are abstract and simplified representations of reality and can therefore only do so much in transposing reality to 'virtuality', explaining observed phenomena or foreseeing future realities. In most studies using ABM, the main notion is locality, or location (Epstein, 1999). Models in which agents roam through the simulation environment are frequent. ...
Chapter
This study presents a state of the art and a systematic review of literature that identifies the driving forces of land use/cover change (LUCC) and aims to move the discussion forward on the role of social actors in the direct and indirect drivers of land use change in the drylands of South America. Specifically, this review focuses on the characterization of how LUCC studies have addressed the factors driving territorial transformations in drylands, and their main related physical-biological and socioeconomic consequences. In this regard, there are on the one hand studies focused on describing the processes of land use changes from frameworks that are generally qualitative and poorly spatialised. On the other hand—particularly in South America—there are studies that delve into LUCC with very precise descriptions in the spatial context, but do not always manage to articulate a social and cultural approach that incorporates the qualitative explanations that we find in the first type of studies.
... It is important to keep in mind that ABMs are abstract and simplified representations of reality and can therefore only do so much in transposing reality to 'virtuality', explaining observed phenomena or foreseeing future realities. In most studies using ABM, the main notion is locality, or location (Epstein, 1999). Models in which agents roam through the simulation environment are frequent. ...
Preprint
This chapter presents a Land Use Cover Change (LUCC) model application developed for Greater Sydney. It aims to help decision making in the context of the strategic and spatial planning of Greater Sydney. To this end, the model simulates the dynamics of industrial, low density residential and medium-high density residential areas at a spatial resolution of 100x100 m. A series of three Land Use Maps at 30 m were specifically developed for the modelling exercise. They determine part of the exercise's limitations, such as the model simplicity and the short timeframe of the simulation (2006-2011-2016). Future efforts should focus on the simulation of population and job growth, exchanges between regions and the application of the model for scenario analysis and impact assessment. All data of the modelling exercise are openly available at the CityData portal of the City Futures Research Centre.
... Kelly et al. [44] reviewed five types of common integrated modeling approaches, three of which are: (i) a system dynamics modeling (SDM) approach [44][45][46][47][48] that does not require algorithmic codes, interprets how changes in one part of the system impact the system as a whole, and gives guidance for considering alternative scenarios. However, one major limitation of this approach is that it does not support the representation of spatial variables and spatial variability in the modeled system, for which geographical information systems (GIS) are suggested; (ii) the Bayesian-networks approach which is suitable for the system's uncertain inputs and outputs [22,44,[49][50][51], and enables the use of objective and subjective data, provides an empirical solution, and can be continuously updated to include new information; (iii) an agent-based modeling (ABM) approach which models social interactions between heterogeneous agents in a system, distributed spatially across a shared physical environment [52][53][54][55][56][57][58][59][60][61] and accounts for influences exerted by agents in the system as well as the effects of such influences on particular agents. Yet, ABM involves a tradeoff between representational accuracy and the tractability of its analysis. ...
Article
Full-text available
Household water, food and energy (WFE) expenditures reflect respective survival needs for which their resources and social welfare are inter-related. We developed a policy-driven quantitative decision-making strategy (DMS) to address the domain's geospatial entities (nodes or administrative districts) of the WFE nexus, assumed to be information-linked across the domain nodal-network. As investment in one of the inter-dependent nexus components may cause unexpected shock to the others, we refer to the WFE normalized expenditures product (Volume) as representing the nexus holistic measure. The Volume rate conforms to Boltzmann entropy, suggesting directed information from high to low Volume nodes. Our hypothesis of causality-driven directional information is exemplified by a sharp price increase in wheat and rice, for the U.S. and Thailand respectively, that manifests its impact on the temporal trend of WFE expenditures in Israel's administrative districts. Welfare mass (WM) represents the node's Volume combined with its income and population density. A formulation is suggested for the nodal-network WM temporal balance where each node is scaled by a human-factor (HF) for subjective attitude and a superimposed nodal source/sink term manifesting policy decisions. Our management tool is based on two sequential governance processes: the first starting with historical data mapping the mean temporal nodal Volumes to single out extremes, and the second following with a WM balance simulation predicting the nodal-network outcome of policy-driven targeting. In view of the proof of concept by model simulations in our previous research, here the HF extends the model and attention is devoted to emphasizing how the currently developed decision-making approach categorically differs from existing nexus-related methods. The first governance process is exemplified with illustrations for Israel's districts. Findings show higher expenditures for water and lower for energy, and maps pointing to extremes in districts' mean temporal Volume. Illustrations of domain surfaces for that period enable assessment of relative inclination trends of the normalized Water, Food and Energy directions continuum assembled from time stations, and evolution trends for each of the WFE components.
... Three main simulation approaches are System Dynamics (SD) [2], Discrete-Event Simulation (DES) [3], and Agent-Based Modeling (ABM) [4]. ABM typically deals with complex systems, where the interactions between multiple actors are not easily predictable either with systems of equations, as in SD approaches, or with sequences of events, as in DES [5,6]. ...
Article
Full-text available
We explore the Covid-19 diffusion with an agent-based model of an Italian region with a population on a scale of 1:1000. We also simulate different vaccination strategies. From a decision support system perspective, we investigate the adoption of artificial intelligence techniques to provide suggestions about more effective policies. We adopt the widely used multi-agent programmable modeling environment NetLogo, adding genetic algorithms to evolve the best vaccination criteria. The results suggest a promising methodology for defining vaccine rates by population types over time. The results are encouraging towards a more extensive application of agent-oriented methods in public healthcare policies.
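The study above couples a NetLogo epidemic model with genetic algorithms that evolve vaccination criteria. The sketch below is a drastically simplified, hypothetical analogue in Python: the "epidemic model" is a two-group harm score, and the GA evolves how to split a fixed number of doses between the groups; population sizes, contact rates, severities and GA settings are all assumptions for illustration.

```python
# Toy sketch of evolving a vaccination allocation with a genetic algorithm.
# In the cited study the fitness evaluation is a full NetLogo simulation run;
# here it is a deliberately crude two-group harm score with assumed numbers.
import random

GROUPS = 2                      # e.g., "younger" and "older" (assumed)
POP = [800, 200]                # group sizes
CONTACT = [0.30, 0.10]          # per-step infection pressure by group
SEVERITY = [1.0, 4.0]           # relative harm of an infection by group
DOSES = 300                     # vaccine doses available

def simulate_harm(allocation):
    """Crude stand-in for the ABM: harm-weighted infections given dose shares."""
    harm = 0.0
    for g in range(GROUPS):
        doses_g = min(POP[g], allocation[g] * DOSES)
        susceptible = POP[g] - doses_g
        harm += CONTACT[g] * susceptible * SEVERITY[g]
    return harm

def normalise(genome):
    clipped = [max(0.0, x) for x in genome]
    total = sum(clipped) or 1.0
    return [x / total for x in clipped]

def evolve(pop_size=30, generations=40, seed=2):
    random.seed(seed)
    population = [normalise([random.random() for _ in range(GROUPS)])
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate_harm)                   # lower harm = fitter
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # crossover
            child = [x + random.gauss(0, 0.05) for x in child]  # mutation
            children.append(normalise(child))
        population = parents + children
    return min(population, key=simulate_harm)

if __name__ == "__main__":
    best = evolve()
    print("best dose shares by group:", [round(x, 2) for x in best])
```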
... In addition, compared to the models that demonstrate the agricultural rebound phenomenon based on decisions driven by strictly economic incentives (e.g., Gómez and Pérez-Blanco, 2014), the ABAD model adopts an alternative structure in three ways (Ghoreishi et al., 2021). First, unlike models that are based on the assumption that humans maximize their benefits, or producers maximize profit, ABAD uses a "bounded rationality" decision structure whereby agents have imperfect knowledge of system relationships and values and are only able to seek satisficing solutions rather than optimal ones (Epstein, 1999;Gigerenzer and Selten, 2001;Simon, 1955). Second, most models in the literature have been constructed with economic decision-making rules driven by market-based price incentives (e.g., Gómez and Pérez-Blanco, 2014;Huffaker, 2008), whereas the ABAD model includes economic and social factors (e.g., social interaction among farmers) influencing farmers' decision-making. ...
Article
Modernizing traditional irrigation systems has long been recognized as a means to reduce water losses. However, empirical evidence shows that this practice may not necessarily reduce water use in the long run; in fact, in many cases, the converse is true—a concept known as the rebound phenomenon. This phenomenon is at the heart of a fundamental research gap in the explicit evaluation of co-evolutionary dynamics and interactions among socio-economic and hydrologic factors in agricultural systems. This gap calls for the application of systems-based methods to evaluate such dynamics. To address this gap, we use a previously developed Agent-Based Agricultural Water Demand (ABAD) model, applied to the Bow River Basin (BRB) in Canada. We perform a time-varying variance-based global sensitivity analysis (GSA) on the ABAD model to examine the individual effect of factors, as well as their joint effect, that may give rise to the rebound phenomenon in the BRB. Our results show that economic factors dominantly control possible rebounds. Although social interaction among farmers is found to be less influential than the irrigation expansion factor, its interaction effect with other factors becomes more important, indicating the highly interactive nature of the underlying socio-hydrological system. Based on the insights gained via GSA, we discuss several strategies, including community participation and water restrictions, that can be adopted to avoid the rebound phenomenon in irrigation systems. This study demonstrates that a time-varying variance-based GSA can provide a better understanding of the co-evolutionary dynamics of the socio-hydrological systems and can pave the way for better management of water resources.
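The abstract describes a time-varying variance-based global sensitivity analysis. The core quantity of such an analysis is the first-order index S_i = Var(E[Y|X_i]) / Var(Y); the sketch below estimates it by binning Monte Carlo samples of an assumed toy stand-in for the ABAD model. It is not the authors' procedure, which is time-varying and typically relies on dedicated Sobol'-style estimators.

```python
# Minimal sketch of a variance-based (Sobol-style) first-order sensitivity index,
# S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning Monte Carlo samples.
# The toy "model" below is assumed for illustration only.
import random
import statistics

def toy_model(price_incentive, social_pressure, expansion):
    # Assumed interaction between economic and social factors.
    return 2.0 * price_incentive + 0.5 * social_pressure + 1.5 * price_incentive * expansion

def first_order_index(which, n=20000, bins=20, seed=3):
    random.seed(seed)
    xs = [[random.random() for _ in range(3)] for _ in range(n)]
    ys = [toy_model(*x) for x in xs]
    var_y = statistics.pvariance(ys)
    # Bin on the factor of interest and take the variance of the bin means.
    bin_values = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        bin_values[min(bins - 1, int(x[which] * bins))].append(y)
    bin_means = [statistics.fmean(b) for b in bin_values if b]
    return statistics.pvariance(bin_means) / var_y

if __name__ == "__main__":
    for i, name in enumerate(["price_incentive", "social_pressure", "expansion"]):
        print(name, round(first_order_index(i), 2))
```

Because of the interaction term, the first-order indices do not sum to one; the gap is exactly the kind of joint effect the cited GSA highlights.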
... By using ABM, it is possible to determine how a system responds to modifications of different conditions. Due to its specific characteristics, ABM allows for a differentiated social science approach, which can be described as "generative," since both the micro- and macro-levels are addressed (Epstein 1999). Therefore, it is often argued that agent-based modeling "is a third way of science" (Axelrod 1997, p. 21) and could complement traditional deductive (positivism) and inductive (interpretivism) reasoning as methods of discovery (Macal and North 2009). ...
Chapter
Trust plays a pivotal role in many different contexts and thus has been investigated by researchers in a variety of disciplines. In this chapter, we provide a comprehensive overview of methodological approaches to investigating trust and its antecedents. We explain how quantitative methods may be used to measure expectations about a trustee or instances of communication about trust efficiently, and we explain how using qualitative measures may be beneficial to researching trust in less explored contexts and for further theory development. We further point out that mixed methods research (uniting both quantitative and qualitative approaches) may be able to grasp the full complexity of trust. Finally, we introduce how agent-based modeling may be used to simulate and predict complex trust relationships on different levels of analysis. We elaborate on challenges and advantages of all these different methodological approaches to researching trust and conclude with recommendations to guide trust researchers in their planning of future investigations on both situational trust and long-term developments of trust in different contexts, and we emphasize why we believe that such undertakings will benefit from interdisciplinary approaches.
... Following the motto of generative social science - "If you didn't grow it, you didn't explain it" [7] - the forward approach holds that if an agent-based model with a defined set of mechanisms can produce the target social phenomenon, the model is a candidate explanation for that phenomenon. However, this does not mean that the candidate is unique - there can be other models that generate the same target phenomenon. ...
Chapter
Different theoretical mechanisms have been proposed for explaining complex social phenomena. For example, explanations for observed trends in population alcohol use have been postulated based on norm theory, role theory, and others. Many mechanism-based models of phenomena attempt to translate a single theory into a simulation model. However, single theories often only represent a partial explanation for the phenomenon. The potential of integrating theories together, computationally, represents a promising way of improving the explanatory capability of generative social science. This paper presents a framework for such integrative model discovery, based on multi-objective grammar-based genetic programming (MOGGP). The framework is demonstrated using two separate theory-driven models of alcohol use dynamics based on norm theory and role theory. The proposed integration considers how the sequence of decisions to consume the next drink in a drinking occasion may be influenced by factors from the different theories. A new grammar is constructed based on this integration. Results of the MOGGP model discovery process find new hybrid models that outperform the existing single-theory models and the baseline hybrid model. Future work should consider and further refine the role of domain experts in defining the meaningfulness of models identified by MOGGP.
... Gardner, 1970;Wolfram, 2002). This "simulationist" interpretation of Grice's program of creature construction brings to mind agent-based simulations in other disciplines, such as computer science and sociology, where artificial agents are endowed with simple behavioral rules, and the macro-level outputs that these rules can generate are then studied through simulations from contrasting initial conditions, with the possibility of subsequently adjusting the behavioral rules or the initial conditions so as to test out alternative trajectories through state-space (Epstein, 1999;Gardner, 1970;Wolfram, 2002). For example, in Thomas Schelling's (1978) well-known checkerboard model of racial segregation, agents have a mild preference for staying next to agents of their own type, and they move around on a two-dimensional grid depending on whether this preference is satisfied. ...
Article
Full-text available
This paper analyzes three contrasting strategies for modeling intentional agency in contemporary analytic philosophy of mind and action, and draws parallels between them and similar strategies of scientific model-construction. Gricean modeling involves identifying primitive building blocks of intentional agency, and building up from such building blocks to prototypically agential behaviors. Analogical modeling is based on picking out an exemplary type of intentional agency, which is used as a model for other agential types. Theoretical modeling involves reasoning about intentional agency in terms of some domain-general framework of lawlike regularities, which involves no detailed reference to particular building blocks or exemplars of intentional agency (although it may involve coarse-grained or heuristic reference to some of them). Given the contrasting procedural approaches that they employ and the different types of knowledge that they embody, the three strategies are argued to provide mutually complementary perspectives on intentional agency.
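The citing excerpt above recalls Schelling's (1978) checkerboard model: agents with only a mild preference for same-type neighbours relocate when dissatisfied, and marked segregation emerges at the macro level. A minimal sketch of that model, with grid size, vacancy share and tolerance threshold chosen purely for illustration, is:

```python
# Minimal sketch of Schelling's (1978) checkerboard segregation model, as recalled
# in the citing excerpt above. Grid size, density and the tolerance threshold are
# illustrative assumptions, not values from any of the cited works.
import random

SIZE, EMPTY_SHARE, SIMILAR_WANTED = 20, 0.1, 0.3  # 30% similar neighbours suffices

def neighbours(grid, r, c):
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                cells.append(grid[(r + dr) % SIZE][(c + dc) % SIZE])
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    me, around = grid[r][c], neighbours(grid, r, c)
    if me is None or not around:
        return False
    return sum(1 for x in around if x == me) / len(around) < SIMILAR_WANTED

def step(grid):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
    moved = 0
    for r, c in movers:
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))  # move to a random empty cell
        grid[nr][nc], grid[r][c] = grid[r][c], None
        empties.append((r, c))
        moved += 1
    return moved

def similarity(grid):
    same = total = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            around = neighbours(grid, r, c)
            same += sum(1 for x in around if x == grid[r][c])
            total += len(around)
    return same / total

if __name__ == "__main__":
    random.seed(4)
    cells = ["A", "B", None]
    weights = [(1 - EMPTY_SHARE) / 2, (1 - EMPTY_SHARE) / 2, EMPTY_SHARE]
    grid = [[random.choices(cells, weights)[0] for _ in range(SIZE)] for _ in range(SIZE)]
    print("initial similarity:", round(similarity(grid), 2))
    for _ in range(60):
        if step(grid) == 0:
            break
    print("final similarity:  ", round(similarity(grid), 2))
```

Even with a tolerance of only 30%, the final neighbourhood similarity rises well above the initial level, which is the macro-level pattern the excerpt uses to motivate simulation from contrasting initial conditions.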
... In general, use patterns and research across disciplines shows that ABM is best used when: (a) interactions between agents 201 are complex, (b) agent's positions are not fixed, (c) the population is heterogeneous, (d) the topology of the interactions 202 is complex, and (d) agents have complex behavior (18). ABM can help us to trace how agents' (individual) rules generate 203 macroscopic regularities and also to separate individual rationality from macroscopic equilibrium and decision science from 204 social science in general (27). ABM is useful for theoretical and empirical research of complex systems because, if/when the 205 system entities and their relationships can be defined, this definition can be used to determine or observe observed the emergent 206 system-level behaviors and to explore different scenarios that could or will occur in the future (28). ...
Preprint
Full-text available
In this chapter, we provide the basis for why a Complex Adaptive Systems (CAS) approach is critically important to solving problems. As a naturalistic reality, a theoretical framework, or a heuristic device, CAS is a powerful tool. Agent Based Modeling (ABM), too, can be a powerful way to discreetly model a CAS understanding of phenomena. When ABM can be used, it is a powerful tool. However, ABM has drawbacks that make it an infeasible method the majority of the time. These drawbacks should not preclude us from using CAS as an alternative, “mixed,” “qualitative,” or “heuristic” method. For this reason, we have outlined a new approach—the Agent Based Approach or ABA—which can be used the majority of the time when CAS analysis is indicated. ABA was developed by Drs. Derek and Laura Cabrera at Cornell University with an expressed focus on the analyses of CASs leading to specific policy recommendations, although it is quite possible that many aspects of ABA can be applied outside of a policy context. This is especially true in light of more nuanced and expanded definitions of “policy” and/or “policy analyst.” Where one might think of “a policy” as a statement or document presented in a legislative, legal, or bureaucratic context, we promote that an understanding of CAS means that a “policy” is simply a set of guidelines for understanding how agent action (the following of simple rules) will affect emergent properties. In other words, policy can be defined as “a statement of the simple interaction rules that one predicts will lead to desired systemic change.” In this regard, “a policy” is something any person might utilize anywhere, not merely something a policy analyst in the halls of Congress might use. To understand and effect change on any CAS—even if the CAS is your family, your classroom, your team, your organization, the state or federal system, or a global crisis—requires you understand the system, to identify the types of actions that can be taken to alter it, and then codify those generalizable actions into specific recommendations. The steps to doing this are the same regardless of the venue or scale. The degree, scale, resources available, timeline, stakeholders, and relative complexity may change, but the basic process does not. In other words, policy analysis is a fractal pattern and “the analysis of policy” is for everyone (i.e., not limited to legislative personnel). Therefore, ABA is a tool that anyone can use, and is well-suited for the formally trained policy analyst. ABA provides another tool in the systems scientist’s tool belt and is an invaluable means by which policy students or scientists can better understand and effect complex adaptive systems.
... Agent-based models (ABMs) can be a useful tool for modeling and understanding how macro-scale/aggregate features of complex systems emerge from micro-scale/individual decisions, interactions, and feedbacks ("generative" social science (Epstein, 1999)). As a result, ABMs are used in many application areas, including land use change (Parker et al., 2003;Evans and Kelley, 2004;Brown et al., 2005;Evans and Kelley, 2008;Kelley and Evans, 2011;Evans et al., 2013;Brown et al., 2017;Dou et al., 2020;Li et al., 2020), ecology (Black and McKane, 2012;Grimm, 1999;van der Vaart et al., 2016), and climate change adaptation (Balbi et al., 2013;Barthel et al., 2008;Gerst et al., 2013;Schneider et al., 2000;Ziervogel et al., 2005;Lamperti et al., 2020). ...
Article
Full-text available
Agent-based models (ABMs) are widely used to analyze coupled natural and human systems. Descriptive models require careful calibration with observed data. However, ABMs are often not calibrated in a formal sense. Here we examine the impact of data record size and aggregation on the calibration of an ABM for housing abandonment in the presence of flood risk. Using a perfect model experiment, we examine (i) model calibration and (ii) the ability to distinguish a model with inter-agent interactions from one without. We show how limited data sets may not adequately constrain a model with just four parameters and relatively minimal interactions. We also illustrate how limited data can be insufficient to identify the correct model structure. As a result, many ABM-based inferences and projections rely strongly on prior distributions. This emphasizes the need for utilizing independent lines of evidence to select sound and informative priors.
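The abstract describes a perfect model experiment: synthetic observations are generated with known parameters, and calibration is then asked to recover them from data records of varying length. The toy sketch below reproduces only that logic, with an assumed one-parameter abandonment model and a simple grid-based likelihood; the actual study calibrates a four-parameter ABM with inter-agent interactions.

```python
# Toy sketch of a "perfect model experiment" for calibration: generate data from a
# known parameter, then see how well records of different sizes constrain it.
# The one-parameter abandonment model and all numbers are assumptions.
import math
import random

def simulate_abandonments(rate, n_parcels=50, seed=None):
    rng = random.Random(seed)
    return sum(1 for _ in range(n_parcels) if rng.random() < rate)

def log_likelihood(rate, observed, n_parcels=50):
    # Binomial log-likelihood of the observed abandonment count (constant dropped).
    if not 0 < rate < 1:
        return float("-inf")
    return observed * math.log(rate) + (n_parcels - observed) * math.log(1 - rate)

if __name__ == "__main__":
    true_rate = 0.2
    for record_size in (1, 10, 100):   # number of observed years/records
        observations = [simulate_abandonments(true_rate, seed=i) for i in range(record_size)]
        grid = [i / 100 for i in range(1, 100)]
        score = {r: sum(log_likelihood(r, o) for o in observations) for r in grid}
        best = max(score, key=score.get)
        print(f"{record_size:>3} records -> best-fit rate {best:.2f} (true {true_rate})")
```

Short records can land noticeably far from the true rate, which is the sense in which limited data leave inferences dependent on priors.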
... Third, the agents display self-adaptation or self-organization behavior when they interact with the environment (Drazin and Sandelands 1992;Gell-Mann 1994). Finally, patterns of change emerge at the system level due to agents' self-adaptation and self-organization behaviors (Anderson 1999;Epstein 1999;Kauffman 1995). ...
Article
Full-text available
Firms increasingly put financial pressure on their suppliers, also called squeezing. Suppliers react and adapt to financial squeeze as autonomous agents, causing complex ripple effects across the extended supply chain network. To capture intertwined and highly interactive effects among suppliers, we use agent-based models. We explore the impact of financial squeeze on supply chain network structure and operational outcomes. Results suggest that financial squeeze affects the stability of the supply chain network and the effect varies depending on the location of the suppliers. Firms located at the bottom of the supply chain network suffer most from financial squeeze, and the magnitude of the effect increases as one goes further upstream. In addition, as existing suppliers exit the network and new suppliers enter, three network archetypes (Empty Nest, TransitUp, and StableDown) emerge. We identify the conditions and operational consequences associated with these three archetypes. Our findings are informative to managers at buyer firms about the impacts of squeezing strategy on their extended supply chain partners, who oftentimes are outside their immediate purview.
... Some researchers have remarked that, in many agent-based models, making the agents more "intelligent" speeds up the result [14]. This observation resonates with the debate on "rational expectations" that agitated economics in the 1970s, 1980s and 1990s. ...
Preprint
Full-text available
We dare to make use of a possible analogy between neurons in a brain and people in society, asking ourselves whether individual intelligence is necessary in order for collective wisdom to emerge and, most importantly, what sort of individual intelligence is conducive to greater collective wisdom. We review insights and findings from connectionism, agent-based modeling, group psychology, economics and physics, casting them in terms of the changing structure of the system's Lyapunov function. Finally, we apply these insights to the sorts and degrees of intelligence of preys and predators in the Lotka-Volterra model, explaining why certain individual understandings lead to co-existence of the two species whereas other usages of their individual intelligence cause global extinction.
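For reference, the standard textbook form of the Lotka-Volterra predator-prey model, which the preprint presumably takes as its starting point, is:

```latex
\frac{dx}{dt} = \alpha x - \beta x y, \qquad
\frac{dy}{dt} = \delta x y - \gamma y,
```

where x is the prey density, y the predator density, \alpha the prey growth rate, \beta the predation rate, \delta the predator reproduction rate per prey consumed, and \gamma the predator mortality rate. In the agent-based variants the abstract alludes to, these rates would presumably cease to be fixed constants and instead depend on how individual preys and predators use their intelligence.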
... For example, the researcher may be interested in finding out whether a possibly idiosyncratic explanation (Eisenhardt 1989) for a certain observation may apply to other contexts, i.e., in exploring the explanation's generalizability. Then, ACE may provide a means for, first, studying whether the observation can be reproduced in a model - i.e., "grown" in the words of Epstein (1999); if so, the researcher can, second, explore further contexts by varying parameters in order to figure out under which conditions the phenomenon observed in the field emerges. ...
Preprint
Agent-based computational economics (ACE) - while adopted comparably widely in other domains of managerial science - is a rather novel paradigm for management accounting research (MAR). This paper provides an overview of opportunities and difficulties that ACE may have for research in management accounting and, in particular, introduces a framework that researchers in management accounting may employ when considering ACE as a paradigm for their particular research endeavor. The framework builds on the two interrelated paradigmatic elements of ACE: a set of theoretical assumptions on economic agents and the approach of agent-based modeling. Particular focus is put on contrasting opportunities and difficulties of ACE in comparison to other research methods employed in MAR.
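The citing excerpt preceding this abstract describes the two-step ACE logic: first reproduce ("grow") an observed phenomenon in a model, then vary parameters to find the conditions under which it emerges. A generic, hypothetical sketch of the second step is a parameter sweep like the one below, here over the seed share of a toy Granovetter-style adoption cascade; the model and every number are assumptions, not taken from the cited paper.

```python
# Sketch of sweeping a parameter to see under which conditions a phenomenon emerges.
# Toy model: a threshold ("Granovetter-style") adoption cascade with assumed thresholds.
import random

def cascade_size(seed_share, n=1000, seed=5):
    rng = random.Random(seed)
    thresholds = [max(0.0, rng.gauss(0.25, 0.1)) for _ in range(n)]   # assumed distribution
    adopted = [i < seed_share * n for i in range(n)]                  # exogenous seed adopters
    changed = True
    while changed:
        changed = False
        share = sum(adopted) / n
        for i in range(n):
            if not adopted[i] and share >= thresholds[i]:
                adopted[i] = True
                changed = True
    return sum(adopted) / n

if __name__ == "__main__":
    for seed_share in (0.01, 0.05, 0.10, 0.20):
        print(f"initial adopters {seed_share:.2f} -> final share {cascade_size(seed_share):.2f}")
```

The sweep makes the qualitative point of the excerpt: the same micro-level rules produce the full-blown phenomenon only beyond a critical region of the parameter space.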
... For earlier vintages of the K + S model see Dosi et al. (2010, 2013, 2015) and the survey in Dosi et al. (2017b). Lamperti et al. (2018) extend the K + S model to account for the coevolution of climate and macroeconomic dynamics. For a general overview of ABM applications in economics and the social sciences, see Tesfatsion (2006), Epstein (1999) and Gilbert (2008). Axelrod and Tesfatsion (2006) provide a concise introduction. ...
Article
Full-text available
In this work we discuss the research findings from the labour-augmented Schumpeter meeting Keynes (K + S) agent-based model. It comprises comparative dynamics experiments on an artificial economy populated by heterogeneous, interacting agents: workers, firms, banks and the government. The exercises are characterized by different degrees of labour flexibility, or by institutional shocks entailing labour market structural reforms, wherein the phenomenon of hysteresis is endogenous and pervasive. The K + S model constitutes a laboratory to evaluate the effects of new institutional arrangements, such as active/passive labour market policies and fiscal austerity. In this perspective, the model allows mimicking many of the customary policy responses which the European Union and many Latin American countries have embraced in reaction to the recent economic crises. The obtained results seem to indicate, however, that most of the proposed policies are likely inadequate to tackle the short-term consequences of crises, and even risk undermining long-run economic prospects. More concretely, the conclusions offer a possible explanation for the negative path traversed by economies like Brazil, where many of the mentioned policies were applied in a short period, and hint at some risks ahead.
... For example, the researcher may be interested in finding out whether a possibly idiosyncratic explanation (Eisenhardt 1989) for a certain observation may apply to other contexts, i.e., in exploring the explanation's generalizability. Then, ACE may provide a means for, first, studying whether the observation can be reproduced in a model - i.e., "grown" in the words of Epstein (1999); if so, the researcher can, second, explore further contexts by varying parameters in order to figure out under which conditions the phenomenon observed in the field emerges. ...
Article
Full-text available
Agent-based computational economics (ACE)—while adopted comparably widely in other domains of managerial science—is a rather novel paradigm for management accounting research (MAR). This paper provides an overview of opportunities and difficulties that ACE may have for research in management accounting and, in particular, introduces a framework that researchers in management accounting may employ when considering ACE as a paradigm for their particular research endeavor. The framework builds on the two interrelated paradigmatic elements of ACE: a set of theoretical assumptions on economic agents and the approach of agent-based modeling. Particular focus is put on contrasting opportunities and difficulties of ACE in comparison to other research methods employed in MAR. JEL Classifications: C63; D8; D91; M40.
... They are heterogeneous, boundedly rational, and act autonomously in an explicit space of local interactions. As summarized in [7], these characteristics of the model define an agent-based model [8]. The individual search processes, individual decision-making, and individual learning result in the macrobehavior of the firm, incrementally finding better-performing strategies. ...
Conference Paper
Full-text available
This work researches the impact of including a wider range of participants in the strategy-making process on the performance of organizations, which operate in either moderately or highly complex environments. Agent-based simulation demonstrates that the increased number of ideas generated from larger and diverse crowds and subsequent preference aggregation lead to the rapid discovery of higher peaks in the organization's performance landscape. However, this is not the case when the expansion in the number of participants is small. The results confirm the most frequently mentioned benefit in the Open Strategy literature: the discovery of better-performing strategies.
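The abstract above describes agents searching a performance landscape, in the spirit of NK fitness landscapes, with more participants generating more candidate strategies per period. The sketch below is an assumed toy version: an NK-style landscape with K = 1, participants who each propose a one-bit strategy change, and adoption of the best proposal. It is meant only to illustrate why larger, more diverse crowds can reach higher peaks faster, not to reproduce the cited simulation.

```python
# Toy NK-style strategy search: a larger crowd proposes more candidate one-bit changes
# each period, and the organisation adopts the best proposal if it improves performance.
# N, K, crowd sizes and the number of periods are assumptions for illustration.
import itertools
import random

N, K = 12, 1
rng = random.Random(6)
# Fitness contribution of bit i depends on bit i and its K right-hand neighbours.
TABLE = [
    {bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
    for _ in range(N)
]

def fitness(config):
    return sum(
        TABLE[i][tuple(config[(i + j) % N] for j in range(K + 1))] for i in range(N)
    ) / N

def search(crowd_size, periods=50, seed=7):
    r = random.Random(seed)
    config = [r.randint(0, 1) for _ in range(N)]
    for _ in range(periods):
        proposals = []
        for _ in range(crowd_size):                # each participant proposes one flip
            candidate = config.copy()
            candidate[r.randrange(N)] ^= 1
            proposals.append(candidate)
        best = max(proposals, key=fitness)         # preference aggregation: adopt best idea
        if fitness(best) > fitness(config):
            config = best
    return fitness(config)

if __name__ == "__main__":
    for crowd in (1, 3, 10):
        print(f"{crowd:>2} participants -> final performance {search(crowd):.3f}")
```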
Book
Full-text available
After the Great Financial Crisis, economic theory was fiercely criticized from both outside and inside the discipline for being incapable of explaining a crisis of such magnitude. Slowly but persistently, new strands of economic thought are developing, to replace the old-fashioned neoclassical economic theory, which have a common characteristic: they are better suited to help understand the real-world economy. This book explores the key tenets and applications of these. This book opens with an explanation of the “real world” approach to economics in which theoretical models resemble real-world situations, realistic assumptions are made, and factors such as uncertainty, coordination problems, and bounded rationality are incorporated. Additionally, this book explores the ramifications of considering the economy as both a dynamic system - with a past, present, and future - and a complex one. These theoretical precepts of the real-world economy are then applied to some of the most pressing economic issues facing the world today including ecological sustainability, the rise of corporate power, the growing dominance of the financial world, and rising unemployment, poverty, and inequality. In each case, this book reveals the insights of the shortcomings of the neoclassical approach which fails to illuminate the complexities behind each issue. It is demonstrated that, by contrast, adopting an approach grounded in the real world has the power to produce policy proposals to help tackle these problems. This book is essential reading for anyone seeking a deeper understanding of the economy, including readers from economics and across the social sciences.
Article
Argument Agent-based social simulations have historically been evaluated using two criteria: verification and validation. This article questions the adequacy of this dual evaluation scheme. It claims that the scheme does not conform to everyday practices of evaluation, and has, over time, fostered a theory-practice gap in the assessment of social simulations. This gap originates because the dual evaluation scheme, inherited from computer science and software engineering, on one hand, overemphasizes the technical and formal aspects of the implementation process and, on the other hand, misrepresents the connection between the conceptual and the computational model. The mismatch between evaluation theory and practice, it is suggested, might be overcome if practitioners of agent-based social simulation adopt a single criterion evaluation scheme in which: i) the technical/formal issues of the implementation process are tackled as a matter of debugging or instrument calibration, and ii) the epistemological issues surrounding the connection between conceptual and computational models are addressed as a matter of validation.
Chapter
Criminal organisations pose a direct threat and disruption to society as they have a destructive impact on daily life and weaken the social fabric and legitimacy of society and economy. The complexity and inherent resilience of this type of subversive crime cause counter measures to appear reasonably effective when analysed in isolation. However, when implemented, they may produce unanticipated effects, counteract effects of other interventions, or harm security altogether. There is a pressing need for anticipating illicit behaviour resilience and innovative approaches to deal with this challenge. Anticipatory intelligence supports the exploration of near-future criminal organisation evolution and the identification of opportunities for sustainable and effective counter measures. This entails an understanding of criminal organisation mechanisms and in particular the analysis of criminal behavioural resilience to counter measures. Understanding this inherent complexity embedded in the cat-and-mouse game between criminals and law enforcement enables the analysis of the impact of crime prevention and law enforcement strategies. This chapter approaches the resilience of criminal organisations from a complex systems perspective and shows how different types of anticipatory intelligence approaches (from qualitative to quantitative) can be used in practice. It also proposes a novel complexity-based hybrid methodology approach to provide law enforcement the capability to be two steps ahead of criminal organisations. Keywords: Anticipatory intelligence; Criminal organisations; Complex systems; Resilience; Hybrid modelling
Chapter
Full-text available
With the growth in tourism demand as a vital economic activity worldwide, the tourism system may reach critical thresholds, encompassing transformations of territories towards their touristification. This process has led to land use/cover change (LUCC). In Portugal, high levels of tourism demand have resulted in a tourism development model set on land use artificialisation and intensification. Even though tourism development has spatial implications, there are few empirical studies in the literature, mostly because LUCC directly driven by tourism is difficult to track. This book chapter proposes a Cellular Automata–Agent-based model to integrate tourism demand forecasts and suitable areas for future tourism development to explore LUCC in 2030 in a region in Southwest Portugal. The results inform spatial planners and decision-makers designing land use policies that, by 2030, a 3.5% increase in tourism demand may result in a 61% increase in tourism LUCC.
Article
When a population exhibits collective cognitive alignment, such that group members tend to perceive, remember, and reproduce information in similar ways, the features of socially transmitted variants (i.e., artifacts, behaviors) may converge over time towards culture‐specific equilibria points, often called cultural attractors. Because cognition may be plastic, shaped through experience with the cultural products of others, collective cognitive alignment and stable cultural attractors cannot always be taken for granted, but little is known about how these patterns first emerge and stabilize in initially uncoordinated populations. We propose that stable cultural attractors can emerge from general principles of human categorization and communication. We present a model of cultural attractor dynamics, which extends a model of unsupervised category learning in individuals to a multiagent setting wherein learners provide the training input to each other. Agents in our populations spontaneously align their cognitive category structures, producing emergent cultural attractor points. We highlight three interesting behaviors exhibited by our model: (1) noise enhances the stability of cultural category structures; (2) short ‘critical’ periods of learning early in life enhance stability; and (3) larger populations produce more stable but less complex attractor landscapes, and cliquish network structure can mitigate the latter effect. These results may shed light on how collective cognitive alignment is achieved in the absence of shared, innate cognitive attractors, which we suggest is important to the capacity for cumulative cultural evolution.
Article
Full-text available
Simulation models of multi‐sector systems are increasingly used to understand societal resilience to climate and economic shocks and change. However, multi‐sector systems are also subject to numerous uncertainties that prevent the direct application of simulation models for prediction and planning, particularly when extrapolating past behavior to a nonstationary future. Recent studies have developed a combination of methods to characterize, attribute, and quantify these uncertainties for both single‐ and multi‐sector systems. Here, we review challenges and complications to the idealized goal of fully quantifying all uncertainties in a multi‐sector model and their interactions with policy design as they emerge at different stages of analysis: (a) inference and model calibration; (b) projecting future outcomes; and (c) scenario discovery and identification of risk regimes. We also identify potential methods and research opportunities to help navigate the tradeoffs inherent in uncertainty analyses for complex systems. During this discussion, we provide a classification of uncertainty types and discuss model coupling frameworks to support interdisciplinary collaboration on multi‐sector dynamics (MSD) research. Finally, we conclude with recommendations for best practices to ensure that MSD research can be properly contextualized with respect to the underlying uncertainties.
Article
Full-text available
The paper explores the usage of agent-based modeling in the context of evacuating large event halls during music festivals and cultural events. An agent-based model is created in NetLogo 6.2.2 to better represent human behavior in such situations. A series of characteristics have been set for the agents in order to preserve their heterogeneity in terms of speed, age, locomotion impairment, familiarity with the environment, evacuating with another person, choosing the closest exit or not, and selecting the closest path to the exits. An "adapted cone exit" approach has been proposed in the paper in order to facilitate the guidance of the agents in the agent-based model to the closest exit, and its advantages have been demonstrated in comparison with the classical "cone exit" approach. Different evacuation scenarios have been simulated and analyzed to better observe the capabilities of evacuation modeling in the case of evacuation emergencies. Besides the overall evacuation time, an average evacuation time has been determined for the agents based on the individual evacuation times, which can be easily connected with a risk indicator associated with each situation. Due to the visual interface offered by the agent-based model, coupled with the evacuation indicators, the proposed model can allow the identification of the main factors that may contribute to a prolonged evacuation process (e.g. overcrowding at one of the exits, not choosing the appropriate door, evacuating with a friend/parent) and the potential measures to be considered for ensuring a safe evacuation process.
Article
Full-text available
Police corruption, especially in the form of bribery, is a severe social problem in many societies. However, neither the extent nor the factors contributing to police bribery are well understood because of data limitation issues. Understandably, it is incredibly challenging to observe and quantify such bribery, as it is usually considered illegal and/or unethical for police to accept and/or ask for bribes. Agent‐based modelling can solve such data limitation issues because it allows for the realistic modelling of hidden behaviours. This study uses an agent‐based modelling technique to investigate a threshold model of police corruption, more specifically, bribery. The authors assume that agents have a threshold regarding bribery, which may be conceptualised as either an honesty threshold or a risk threshold. The threshold value is a dynamic variable randomly assigned to each agent, and each interaction between citizens and officers possesses the potential to change the threshold of each agent.
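The abstract above describes officers whose honesty/risk threshold is a dynamic variable updated by each citizen-officer interaction. A hypothetical minimal sketch of such a threshold model, with the update rule, bribe distribution and all parameters assumed purely for illustration, is:

```python
# Hypothetical sketch of a threshold model of police bribery in the spirit of the
# abstract above: each officer has a dynamic threshold, accepts a bribe when the
# offered amount exceeds it, and every interaction nudges the threshold.
import random

def run(n_officers=500, interactions=20000, nudge=0.02, seed=8):
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_officers)]  # honesty/risk thresholds
    accepted = 0
    for _ in range(interactions):
        officer = rng.randrange(n_officers)
        offer = rng.random()                                 # size of the offered bribe
        if offer > thresholds[officer]:
            accepted += 1
            # Accepting without consequence erodes the threshold a little ...
            thresholds[officer] = max(0.0, thresholds[officer] - nudge)
        else:
            # ... while refusing reinforces it.
            thresholds[officer] = min(1.0, thresholds[officer] + nudge)
    return accepted / interactions, sum(thresholds) / n_officers

if __name__ == "__main__":
    rate, mean_threshold = run()
    print(f"bribes accepted: {rate:.2%}, mean final threshold: {mean_threshold:.2f}")
```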
Article
Full-text available
Genetic studies of complex traits often show disparities in estimated heritability depending on the method used, whether by genomic associations or twin and family studies. We present a simulation of individual genomes with dynamic environmental conditions to consider how linear and nonlinear effects, gene-by-environment interactions, and gene-by-environment correlations may work together to govern the long-term development of complex traits and affect estimates of heritability from common methods. Our simulation studies demonstrate that the genetic effects estimated by genome wide association studies in unrelated individuals are inadequate to characterize gene-by-environment interaction, while including related individuals in genome-wide complex trait analysis (GCTA) allows gene-by-environment interactions to be recovered in the heritability. These theoretical findings provide an explanation for the "missing heritability" problem and bridge the conceptual gap between the most common findings of GCTA and twin studies. Future studies may use the simulation model to test hypotheses about phenotypic complexity either in an exploratory way or by replicating well-established observations of specific phenotypes.
Conference Paper
Full-text available
Agent-based modeling and simulation (ABMS) has become one of the most popular simulation methods. It has been applied to a wide range of application areas including business and management. This article introduces ABMS and explains how it can support management decision making. It covers key concepts and the modeling process. AgentPy is used to show the software implementation of the concepts. This article also provides a literature review on ABMS in business and management research using bibliometric analysis and content analysis. It shows that there has been an increase in the research that uses ABMS and identifies several research clusters across management disciplines such as strategic management, marketing management, operations and supply chain management, financial management, and risk management.
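The article uses AgentPy for its implementation examples; independently of any particular library, the setup/step/collect structure that such models share can be sketched in plain Python (the toy firm-pricing example and all names below are illustrative):

import random

class Firm:
    """A toy agent: a firm adjusting its price toward the market average."""
    def __init__(self):
        self.price = random.uniform(8, 12)

    def step(self, market_avg):
        self.price += 0.1 * (market_avg - self.price) + random.gauss(0, 0.05)

class Model:
    """Minimal ABM loop: set up agents, step them repeatedly, collect outputs."""
    def __init__(self, n_firms=50, steps=100):
        self.firms = [Firm() for _ in range(n_firms)]
        self.steps = steps
        self.history = []

    def run(self):
        for _ in range(self.steps):
            avg = sum(f.price for f in self.firms) / len(self.firms)
            for f in self.firms:
                f.step(avg)
            self.history.append(avg)
        return self.history

prices = Model().run()   # time series of the average price across the artificial market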
Article
Since the pioneering work of Herbert A. Simon, bounded rationality (BR) has constituted a viable alternative to utility maximization in settings characterized by uncertainty about the possible emergence of novel events, missing information, and limitations to human reasoning. Because of its realism, BR gained consensus in organization and management studies. However, BR is a theory of individual decision-making, and substantial extensions are required to turn it into a tool for analyzing collective decision processes. Following an intuition of the late Simon himself, we submit that organizations channel information flows in ways that alleviate human BR. Thus, the analysis and reconstruction of organizational structure, as well as of the differential degrees and qualities of individual BR within organizations, are key to extending this concept to collective decision-making. In this special issue we collected contributions in which instances of BR couple with interaction structures to yield collective behavior. Tools range from mathematical models to experimental settings to computational models, testifying to the value of multiple approaches and perspectives.
Article
Full-text available
This study measured the impacts of failure in Crisis and Emergency Risk Communication (CERC) during the outbreak of a contagious coronavirus disease. The study measured the impacts by the number of individuals and hospitals exposed to the virus. The 2015 Middle East Respiratory Syndrome (MERS) outbreak in South Korea was used to investigate the consequences of CERC failure, where the names of hospitals exposed to MERS-CoV were withheld from the public during the early stage of virus diffusion. Empirical data analyses and simulated model tests were conducted. The findings of the analyses and tests show that an early announcement of the hospital names and publicizing the necessary preventive measures could have reduced the rate of infection by approximately 85% and the number of contaminated healthcare facilities by up to 39%. This level of reduction is comparable to that of vaccination and of social distancing.
Article
Full-text available
The impact of engineered products is a topic of concern in society. Product impact may fall under the categories of economic, environmental or social impact, with the last defined as the effect of a product on the day-to-day life of people. Design teams lack sufficient tools to estimate the social impact of the products they are designing, let alone their combined economic, environmental and social impacts. This paper aims to provide a framework for the estimation of product impact during product design. To estimate product impact, models of both the product and society are required. The framework integrates models of the product, scenario, society and impact into an agent-based model to estimate product impact. Although this paper demonstrates the framework using only social impact, it can also be applied to economic or environmental impacts individually or to all three concurrently. Agent-based modelling has been used previously for product adoption models, but it has not been extended to estimate product impact. Having tools for impact estimation allows product design parameters to be optimised to increase the potential positive impact and reduce the potential negative impact.
Article
Full-text available
The literature in agent-based social simulation suggests that a model is validated when it is shown to 'successfully', 'adequately' or 'satisfactorily' represent the target phenomenon. The notion of 'successful', 'adequate' or 'satisfactory' representation, however, is both underspecified and difficult to generalise, in part because practitioners use a multiplicity of criteria to judge representation, some of which are not entirely dependent on the testing of a computational model during validation processes. This article argues that practitioners should address social epistemology to achieve a deeper understanding of how warrants for belief in the adequacy of representation are produced. Two fundamental social processes for validation, interpretation and commensuration, are discussed to justify this claim. The analysis is advanced with a twofold aim. First, it shows that the conceptualisation of validation could greatly benefit from incorporating elements of social epistemology, for the criteria used to judge adequacy of representation are influenced by the social, cognitive and physical organisation of social simulation. Second, it shows that standardisation tools such as protocols and frameworks fall short in accounting for key elements of social epistemology that affect different instances of validation processes.
Article
Full-text available
Agent-based simulation has become an established method for innovation and technology diffusion research. It extends traditional approaches by modeling diffusion processes from a micro-level perspective, which enables the consideration of various heterogeneous stakeholders and their diverse interactions. While such a simulation is well suited to capture the complex behavior of markets, its application is challenging when it comes to modeling future markets. Therefore, we propose a multi-method approach that combines scenario analysis that generates multiple “pictures of the future” with an agent-based market simulation that offers insight into the potential outcomes of today’s strategic (technological) decisions in each of these futures. Thus, simulation results can provide valuable decision support for corporate planners and industrial engineers when they are engaged in technology planning. This paper describes the novel approach and illustrates it through a sample application that is based on an industry-related research project on the development and market introduction of smart products.
Article
The literature on migration during armed conflict is abundant. Yet, the questions of highest policy relevance—how many people will leave because of a conflict and how many more people will be living outside a country because of a conflict—are not well addressed. This article explores these questions using an agent‐based model, a computational simulation that allows us to connect armed conflict to individual behavioral changes and then to aggregate migration flows and migrant stocks. With detailed data from Nepal during the 1996–2006 conflict, we find that out‐migration rates actually decrease on average, largely due to a prior decrease in return migration. Regardless, the stock of migrants outside the country increases modestly during that period. Broadly, this study demonstrates that population dynamics are inherent to and necessary for understanding conflict‐related migration. We conclude with a discussion of the generalizability and policy implications of this study.
Article
Full-text available
With the advent of platform economies and the increasing availability of online price comparisons, many empirical markets now select on relative rather than absolute performance. This feature might give rise to the 'winner takes all/most' phenomenon, where tiny initial productivity differences amount to large differences in market shares. We study the effect of heterogeneous initial productivities arising from locally segregated markets on aggregate outcomes, e.g., regarding revenue distributions. Several such firm-level characteristics follow distributional regularities or 'scaling laws' (Brock in Ind Corp Change 8(3):409-446, 1999). Among the most prominent are Zipf's law, describing the extremely concentrated size distribution of the largest firms, and the robustly fat-tailed nature of firm size growth rates, indicating a high frequency of extreme growth events. Dosi et al. (Ind Corp Change 26(2):187-210, 2017b) recently proposed a model of evolutionary learning that can simultaneously explain many of these regularities. We propose a parsimonious extension to their model to examine the effect of deviations in market structure from the global competition implicitly assumed in Dosi et al. (2017b). This extension makes it possible to disentangle the effects of two modes of competition: the global competition for sales and the localised competition for market power, which gives rise to industry-specific entry productivity. We find that the empirically well-established combination of 'superstar firms' and a Zipf tail is consistent only with a knife-edge scenario in the neighbourhood of most intensive local competition. Our model also contests the conventional wisdom, derived from a general equilibrium setting, that maximum competition leads to minimum concentration of revenue (Silvestre in J Econ Lit 31(1):105-141, 1993). We find that the most intensive local competition leads to the highest concentration, whilst the lowest concentration appears for a mild degree of (local) oligopoly. Paradoxically, a level playing field in initial conditions might induce extreme concentration in market outcomes.
Article
Full-text available
Objective: To compare the insulin infusion management of critically ill patients by nurses using either a common standard (i.e., human completion of insulin infusion protocol steps) or a smart agent (SA) system that integrates the electronic health record and infusion pump and automates insulin dose selection.
Design: A within-subjects design in which participants completed 12 simulation scenarios, in 4 blocks of 3 scenarios each. Each block was performed with either the manual standard or the SA system. The initial starting condition was randomised to manual standard or SA and alternated thereafter.
Setting: A simulation-based human factors evaluation conducted at a large academic medical centre.
Subjects: Twenty critical care nurses.
Interventions: A systems engineering intervention, the SA, for insulin infusion management.
Measurements: The primary study outcomes were error rates and task completion times. Secondary study outcomes were perceived workload, trust in automation and system usability, all measured with previously validated scales.
Main results: The SA system produced significantly fewer dose errors than manual calculation (17% (n=20) vs 0, p<0.001). Participants completed the protocol significantly faster using the SA system (p<0.001). Overall ratings of workload for the SA system were significantly lower than with the manual system (p<0.001). For trust ratings, there was a significant interaction between time (first or second exposure) and the system used, such that after their second exposure to the two systems, participants had significantly more trust in the SA system. Participants rated the usability of the SA system significantly higher than the manual system (p<0.001).
Conclusions: A systems engineering approach jointly optimised safety, efficiency and workload considerations.
Article
Full-text available
Objective: To analyze the regulation established by the Banco de México (2010) on fees for automated teller machine (ATM) transactions. Methodology: Agent-based modeling and the NetLogo software are used to model the ATM market in Mexico. Three versions of the model are analyzed, in which cardholders decide under different behavioral rules: irrational, probabilistic, and rational. Limitations: In agent-based models the results depend on the initially established conditions, so the model is calibrated for the Mexican context with statistics reported by the Banco de México and the Comisión Nacional Bancaria y de Valores. Originality: Given the lack of evidence in the literature, the research contributes by presenting the effects of ATM regulation from a theoretical perspective using an agent-based computational approach. Results: It is found that, regardless of the decision rule, customers of large banks benefit from the regulation. Conclusions: The implementation of the reverse interchange fee should be analyzed thoroughly, given its direct effects on key market variables such as prices, bank profits, and consumer surplus.
Article
Full-text available
The matching in college admission is a typical example of applying algorithms in cyberspace to improve the efficiency of the corresponding process in physical space. This paper studies the real-time interactive mechanism (RIM) recently adopted in Inner Mongolia, China, where students can immediately observe the provisional admission results of their applications and are allowed to modify an application before the deadline. Since universities accept applications according to the ranking of scores, RIM is believed to make the competition more transparent. However, students may coordinate to manipulate this mechanism: a high-score student can make a last-minute change to the university applied to, opening a slot for a student with a much lower score. With agent-based simulations, we find that a large portion of students will choose to engage in coordinated manipulation, which erodes the welfare and fairness of society. To cope with this issue, we investigate the Multistage RIM (MS-RIM), in which students with different ranges of scores are given different deadlines for application modification. We find that the multistage policy reduces the opportunity for manipulation, but the incentive to manipulate is increased by its higher success rate. Hence, overall social welfare and fairness are further diminished under MS-RIM with a small number of stages, but are improved if the number of stages is large.
Chapter
This work develops a reinforcement learning method for multi-agent negotiation. While existing works have developed various learning methods for multi-agent negotiation, they have primarily focused on Temporal-Difference (TD) algorithms (action-value methods) and overlooked the unique properties of parameterized policies. As such, these methods can be suboptimal for multi-agent negotiation. In this paper, we study the problem of multi-agent negotiation in a real-time bidding scenario. We propose a new method named EQL, short for Extended Q-learning, which iteratively assigns the state transition probability and converges effectively to a unique optimum. By purposefully performing a linear approximation of the off-policy critic, we integrate Expected Policy Gradients (EPG) into basic Q-learning. We then propose a novel negotiation framework that combines EQL with edge computing between mobile devices and cloud servers, handling data preprocessing and transmission simultaneously to reduce the load on cloud servers. We conduct extensive experiments on two real datasets. Both quantitative results and qualitative analysis verify the effectiveness and rationality of our EQL method.
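The EQL method itself is not specified in the abstract; for reference, the standard action-value (TD) baseline it extends is the tabular Q-learning update, sketched here with illustrative negotiation actions and parameters:

from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One temporal-difference (action-value) update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

Q = defaultdict(float)
actions = ["bid_low", "bid_high"]       # illustrative negotiation actions
q_learning_update(Q, state="s0", action="bid_low", reward=1.0, next_state="s1", actions=actions)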
Thesis
Full-text available
This PhD study investigates how universities can build institutional capacity for mainstreaming e-learning innovations in university teaching practice and maximise the adoption of transformational new methods of teaching and learning. The study focusses on digital technology-enabled learning, known as e-learning, innovations that originate in higher education teaching practice and go on to achieve mainstream adoption within the originating university. Previous research, as indicated in this thesis, suggests that teacher-originated e-learning innovations mostly fail to achieve local mainstream adoption, even where there has been considerable long-term investment in information technology infrastructure and support services in that university. Over the past two decades, studies of this problem around the world have mostly used single and multiple case study and large-scale survey research methods to identify causal and critical success factors, while continuing to view innovation adoption as a single linear process described in theories of diffusion of innovations. In this study, the problem of mainstreaming the diffusion of innovations is viewed through a complex, non-linear, dynamic, systems lens to investigate the multiple relationships between critical success factors associated with key roles played in innovation adoption by actors who represent key university institutional stakeholder groups. Interpretive Case-based Modelling, developed as a new bricolage methodology for conducting this study, applies this complex systems perspective by overlapping case studies with multi-agent computer modelling simulations, guided by an interpretive interactionism research design. The cases and models reported in the study result from interviews with 15 individual volunteer participants located in Australian and New Zealand universities. The computer modelling, conducted in-situ during each interview, uncovers the impacts of the relationships between institutional stakeholder roles in universities when enabling and inhibiting connections and levels of influence are applied using a model framework. The resulting participant insights, gained from modelling both real and ideal case-based scenarios during the interviews, revealed a range of diverse opportunities for harnessing stakeholder relationships for building institutional capacity to facilitate change within the specific context of each case. In this way, the study investigated mainstreaming of e-learning innovation adoption in higher education teaching practice from a new complex systems perspective. Findings from the study suggest Interpretive Case-based Modelling has potential applications in other studies of change in complex social systems, with possibilities for further extension to focus groups.
Article
Full-text available
Agricultural activities have significant impacts on water resources, leading to hypoxic zones and harmful algal blooms all over the world. Government agencies, nongovernmental organizations, and individuals have been making various efforts to reduce this non-point source pollution. Among those efforts, even the more cost-effective examples of performance-based environmental payment programs generally have low participation rates. We investigate the effects of externalities in farmers' decisions on neighboring farms, incorporating both a knowledge spillover effect and a positive environmental outcome externality of farmers' best-management practice (BMP) adoption decisions. Our focus is on how these effects may influence the outcome of performance-based payment programs and how policy makers might recognize these effects in the design of cost-effective policies to promote program participation and BMP adoption. Rather than imposing an assumption of profit-maximization or forward-looking behavior, we allow outcomes to emerge from interactions among neighboring farmers. We recommend cost-effective policies across communities depending on their composition. It is more cost-effective to target communities with fewer innovators and/or target the programs towards the least-innovative individuals.
Chapter
Full-text available
TWO USES FOR COMPUTERS: There are two quite different roles that computers might play in biological theorizing. Mathematical models of biological processes are often analytically intractable. When this is so, computers can be used to get a feel for the model's dynamics. You plug in a variety of initial condition values and allow the rules of transition to apply themselves (often iteratively); then you see what the outputs are. Computers are used here as aids to the theorist. They are like pencil and paper or a slide rule. They help you think. The models being investigated are about life. But there is no need to view the computers that help you investigate these models as alive themselves. Computers can be applied to calculate what will happen when a bridge is stressed, but the computer is not itself a bridge. Population geneticists have used computers in this way since the 1960s. Many participants in the Artificial Life research program are doing the same thing. I see nothing controversial about this use of computers. By their fruits shall ye know them. This part of the AL research program will stand or fall with the interest of the models investigated. When it is obvious beforehand what the model's dynamics will be, the results provided by computer simulation will be somewhat uninteresting. When the model is very unrealistic, computer investigation of its properties may also fail to be interesting.
Article
Full-text available
This paper develops an evolutionary trade network game (TNG) that combines evolutionary game play with endogenous partner selection. Successive generations of resource-constrained buyers and sellers choose and refuse trade partners on the basis of continually updated expected payoffs. Trade partner selection takes place in accordance with a modified Gale-Shapley matching mechanism, and trades are implemented using trade strategies evolved via a standardly specified genetic algorithm. The trade partnerships resulting from the matching mechanism are shown to be core stable and Pareto optimal in each successive trade cycle. Nevertheless, computer experiments suggest that these static optimality properties may be inadequate measures of optimality from an evolutionary perspective.
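The TNG relies on a modified Gale-Shapley mechanism; the unmodified deferred-acceptance procedure it builds on can be sketched as follows (buyer/seller names and preference lists are illustrative, and the paper's modification is not reproduced here):

def deferred_acceptance(buyer_prefs, seller_prefs):
    """Standard Gale-Shapley: buyers propose in preference order, sellers keep their best proposal so far."""
    free = list(buyer_prefs)                      # buyers not yet matched
    next_choice = {b: 0 for b in buyer_prefs}     # index of the next seller each buyer will propose to
    engaged = {}                                  # seller -> buyer currently held
    rank = {s: {b: i for i, b in enumerate(prefs)} for s, prefs in seller_prefs.items()}
    while free:
        buyer = free.pop(0)
        seller = buyer_prefs[buyer][next_choice[buyer]]
        next_choice[buyer] += 1
        current = engaged.get(seller)
        if current is None:
            engaged[seller] = buyer
        elif rank[seller][buyer] < rank[seller][current]:
            engaged[seller] = buyer
            free.append(current)                  # displaced buyer proposes again later
        else:
            free.append(buyer)                    # rejected buyer proposes to the next seller
    return {b: s for s, b in engaged.items()}

buyer_prefs = {"b1": ["s1", "s2"], "b2": ["s1", "s2"]}
seller_prefs = {"s1": ["b2", "b1"], "s2": ["b1", "b2"]}
matching = deferred_acceptance(buyer_prefs, seller_prefs)   # {'b1': 's2', 'b2': 's1'}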
Book
Nancy Cartwright argues for a novel conception of the role of fundamental scientific laws in modern natural science. If we attend closely to the manner in which theoretical laws figure in the practice of science, we see that despite their great explanatory power these laws do not describe reality. Instead, fundamental laws describe highly idealized objects in models. Thus, the correct account of explanation in science is not the traditional covering law view, but the ‘simulacrum’ account. On this view, explanation is a matter of constructing a model that may employ, but need not be consistent with, a theoretical framework, in which phenomenological laws that are true of the empirical case in question can be derived. Anti‐realism about theoretical laws does not, however, commit one to anti‐realism about theoretical entities. Belief in theoretical entities can be grounded in well‐tested localized causal claims about concrete physical processes, sometimes now called ‘entity realism’. Such causal claims provide the basis for partial realism and they are ineliminable from the practice of explanation and intervention in nature.
Article
Aggregation means the organization of elements of a system into patterns that tend to put highly compatible elements together and less compatible elements apart. Landscape theory predicts how aggregation will lead to alignments among actors (such as nations) whose leaders are myopic in their assessments and incremental in their actions. The predicted configurations arise from the attempts of actors to minimize their frustration, given their pairwise propensities to align with some actors and oppose others. These attempts lead to a local minimum in the energy landscape of the entire system. The theory is supported by the results of two cases: the alignment of seventeen European nations in the Second World War and the membership of nine computer companies in competing alliances to set standards for Unix computer operating systems. The theory has potential for application to coalitions of political parties in parliaments, social networks, social cleavages in democracies and organizational structures.
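On a common reading of landscape theory, the energy of a configuration sums each pair's propensity weighted by whether the configuration separates the pair, and myopic actors switch sides only when doing so lowers the energy. The simplified Python sketch below ignores actor sizes, and its exact functional form and values are assumptions rather than quotations from the article:

from itertools import combinations

def energy(assignment, propensity):
    """Energy of a two-bloc configuration: pairs with positive propensity raise the energy
    when split across blocs, pairs with negative propensity lower it when split."""
    total = 0.0
    for i, j in combinations(assignment, 2):
        split = assignment[i] != assignment[j]
        total += propensity[frozenset((i, j))] * (1 if split else 0)
    return total

def myopic_step(assignment, propensity):
    """Greedy, incremental search: each actor switches blocs only if the switch lowers the energy."""
    for actor in assignment:
        current = energy(assignment, propensity)
        assignment[actor] = 1 - assignment[actor]
        if energy(assignment, propensity) >= current:
            assignment[actor] = 1 - assignment[actor]   # revert: no improvement
    return assignment

propensity = {frozenset(p): v for p, v in
              {("A", "B"): 1.0, ("A", "C"): -1.0, ("B", "C"): -0.5}.items()}
blocs = myopic_step({"A": 0, "B": 1, "C": 0}, propensity)   # settles into a local energy minimum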
Article
The author examines some major concerns about global economic growth from both theoretical and empirical points of view, using "the limits-to-growth debate as a reference point to understand the earlier debate about the limits to and perils of growth, and to provide some perspective about the newer debate about environmental threats." He concludes that environmental and resource constraints on economic growth should be modest over the next 50 years and that economic growth is possible providing emphasis is given to "the importance of careful scientific and policy analysis and establishing or strengthening institutions that contain incentives that are compatible with the thoughtful balancing of long-run costs and benefits of social investments."
Article
Networks of catalyzed reactions with nonlinear feedback have been proposed to play an important role in the origin of life. We investigate this possibility in a polymer chemistry with catalyzed cleavage and condensation reactions. We study the properties of a well-stirred reactor driven away from equilibrium by the flow of mass. Under appropriate non-equilibrium conditions, the nonlinear feedback of the reaction network focuses the material of the system into a few specific polymer species. The network of catalytic reactions "digests" the material of its environment, incorporating it into its own form. We call the result an autocatalytic metabolism. Under some variations it persists almost unchanged, while in other cases it dies. We argue that the dynamical stability of autocatalytic metabolisms gives them regenerative properties that allow them to repair themselves and to propagate through time.
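As a loose caricature of such a driven polymer chemistry (the catalysis rule, rates and reactor size below are invented for illustration and are not the paper's model), one could write:

import random
from collections import Counter

# A toy flow reactor over binary-string "polymers": condensation joins two polymers,
# cleavage splits one, and condensation fires only if a resident polymer catalyses it.
random.seed(1)
reactor = ["01", "10", "0110", "1001"] * 5          # initial well-stirred soup
FOOD = ["0", "1"]                                    # monomers driven in from the environment

def catalysed(product, pool):
    """Toy rule: the condensation is catalysed if some polymer in the pool contains the product."""
    return any(product in p for p in pool)

for _ in range(2000):
    reactor.append(random.choice(FOOD))              # mass inflow keeps the reactor out of equilibrium
    a, b = random.sample(reactor, 2)
    if catalysed(a + b, reactor):                    # condensation: a + b -> ab
        reactor.remove(a); reactor.remove(b); reactor.append(a + b)
    elif len(a) > 1:                                 # otherwise cleave a at a random bond
        cut = random.randrange(1, len(a))
        reactor.remove(a); reactor += [a[:cut], a[cut:]]
    while len(reactor) > 200:                        # dilution outflow caps the reactor size
        reactor.pop(random.randrange(len(reactor)))

print(Counter(reactor).most_common(5))               # the few species the feedback concentrates mass into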
Article
Described by the philosopher A.J. Ayer as a work of 'great originality and power', this book revolutionized contemporary thinking on science and knowledge. Ideas such as the now legendary doctrine of 'falsificationism' electrified the scientific community, influencing even working scientists, as well as post-war philosophy. This astonishing work ranks alongside The Open Society and Its Enemies as one of Popper's most enduring books and contains insights and arguments that demand to be read to this day.
Article
Despite tendencies toward convergence, differences between individuals and groups continue to exist in beliefs, attitudes, and behavior. An agent-based adaptive model reveals the effects of a mechanism of convergent social influence. The actors are placed at fixed sites. The basic premise is that the more similar an actor is to a neighbor, the more likely that that actor will adopt one of the neighbor's traits. Unlike previous models of social influence or cultural change that treat features one at a time, the proposed model takes into account the interaction between different features. The model illustrates how local convergence can generate global polarization. Simulations show that the number of stable homogeneous regions decreases with the number of features, increases with the number of alternative traits per feature, decreases with the range of interaction, and (most surprisingly) decreases when the geographic territory grows beyond a certain size.
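The interaction rule described above can be sketched directly in Python: with probability equal to cultural similarity, a randomly chosen site copies one trait on which it still differs from a random neighbor (grid size, feature and trait counts here are illustrative):

import random

SIZE, FEATURES, TRAITS = 10, 5, 10   # illustrative grid and culture dimensions

# Each fixed site holds a culture: a vector of FEATURES features, each taking one of TRAITS traits.
grid = {(x, y): [random.randrange(TRAITS) for _ in range(FEATURES)]
        for x in range(SIZE) for y in range(SIZE)}

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step():
    """One interaction: with probability equal to cultural similarity, the site adopts
    one trait on which it still differs from a randomly chosen neighbor."""
    site = random.choice(list(grid))
    nbr = random.choice(neighbors(*site))
    a, b = grid[site], grid[nbr]
    similarity = sum(f == g for f, g in zip(a, b)) / FEATURES
    differing = [i for i in range(FEATURES) if a[i] != b[i]]
    if differing and random.random() < similarity:
        i = random.choice(differing)
        a[i] = b[i]

for _ in range(100_000):
    step()   # local convergence gradually produces a patchwork of homogeneous regions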
Article
It has been hoped that computational approaches can help resolve some well-known paradoxes in game theory. We prove that if the repeated prisoner's dilemma is played by finite automata with less than exponentially (in the number of rounds) many states, then cooperation can be achieved in equilibrium (while with exponentially many states, defection is the only equilibrium). We furthermore prove a generalization to arbitrary games and Pareto optimal points. Finally, we present a general model of polynomially computable games, and characterize in terms of familiar complexity classes ranging from NP to NEXP the natural problems that arise in relation with such games.
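To make "finite automata playing the repeated prisoner's dilemma" concrete, the sketch below implements a two-state grim-trigger machine, far below the exponential state bound discussed above; the payoff matrix and round count are illustrative:

# States of the grim-trigger automaton: 'C' (cooperate until betrayed) and 'D' (defect forever).
def grim_trigger():
    state = "C"
    def play(opponent_last):
        nonlocal state
        if opponent_last == "D":
            state = "D"                 # a single defection triggers permanent defection
        return state
    return play

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

p1, p2 = grim_trigger(), grim_trigger()
last1, last2 = "C", "C"                 # assume cooperation on the (virtual) previous round
score1 = score2 = 0
for _ in range(100):
    move1, move2 = p1(last2), p2(last1)
    gain1, gain2 = PAYOFF[(move1, move2)]
    score1, score2 = score1 + gain1, score2 + gain2
    last1, last2 = move1, move2
# Two grim-trigger automata sustain mutual cooperation: score1 == score2 == 300.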
Article
Milton Friedman (1912–2006) was born in Brooklyn, New York, and received his Ph.D. in economics from Columbia University. He taught at the University of Minnesota, and then for many years at the University of Chicago. After 1977, he was a Senior Research Fellow at the Hoover Institution in Stanford, California. Friedman is best known for his work in monetary theory and for his concern for free enterprise and individual liberty. Milton Friedman was awarded the Nobel Prize in economics in 1976. The following essay, which is reprinted in its entirety, is the most influential work on economic methodology of this century. In his admirable book on The Scope and Method of Political Economy John Neville Keynes distinguishes among “a positive science … [,] a body of systematized knowledge concerning what is; a normative or regulative science … [,] a body of systematized knowledge discussing criteria of what ought to be …; an art … [,] a system of rules for the attainment of a given end”; comments that “confusion between them is common and has been the source of many mischievous errors”; and urges the importance of “recognizing a distinct positive science of political economy.” This [essay] is concerned primarily with certain methodological problems that arise in constructing the “distinct positive science” Keynes called for – in particular, the problem how to decide whether a suggested hypothesis or theory should be tentatively accepted as part of the “body of systematized knowledge concerning what is.”
Article
The paper studies two-person supergames. Each player is restricted to carrying out his strategies by finite automata. A player's aim is to maximize his average payoff and, subject to that, to minimize the number of states of his machine. A solution is defined as a pair of machines in which the choice of machine is optimal for each player at every stage of the game. Several properties of the solution are studied and applied to the repeated prisoner's dilemma. In particular, it is shown that cooperation cannot be the outcome of a solution of the infinitely repeated prisoner's dilemma.
Article
This study is a follow-on effort to a recently completed project, sponsored by the Commanding General, Marine Corps Combat Development Command, that assessed the general applicability of the new sciences to land warfare. "New Sciences" is a catch-all phrase that refers to the tools and methodologies used in nonlinear dynamics and complex systems theory to study physical systems that exhibit a "complicated dynamics." CNA is currently developing a multiagent-based simulation of notional combat called ISAAC (Irreducible Semi-Autonomous Adaptive Combat), a preliminary version of which is described in this report. ISAAC takes a bottom-up, synthesist approach to the modeling of combat, vice the more traditional top-down, or reductionist approach.