Watts and Strogatz's "small world model" of disordered networks is becoming an increasingly popular research tool for modeling human society. As part of this approach, local information mechanisms (landscape properties) are used to approximate real-world conditions in social simulations. We investigate the influence of local information on social simulations performed using the small world model. After defining local information, we use a cellular automaton variant with added shortcuts as a test platform for simulating the spread of an epidemic and examining various influences. We believe our results will help future researchers choose appropriate simulation parameters.
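The test platform described above, a lattice augmented with random shortcuts carrying a contagion process, can be sketched in a few lines. This is a minimal illustration, not the authors' actual platform; the parameters (`k`, `shortcuts`, `p_infect`) and the simple SI contagion rule are assumptions for the sketch.

```python
import random

def small_world(n, k, shortcuts, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    on one side (so degree 2k), plus a number of random shortcut edges.
    Parameters are illustrative, not the paper's exact construction."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            nbrs[i].add((i + d) % n)
            nbrs[(i + d) % n].add(i)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def spread(nbrs, seed=0, steps=50, p_infect=1.0, rng=None):
    """Simple SI contagion: each step, every infected node infects each
    susceptible neighbour with probability p_infect; returns the final
    infected set."""
    rng = rng or random.Random(0)
    infected = {seed}
    for _ in range(steps):
        new = set()
        for i in infected:
            for j in nbrs[i]:
                if j not in infected and rng.random() < p_infect:
                    new.add(j)
        if not new:
            break
        infected |= new
    return infected
```

On a pure ring the infection front advances locally, while even a handful of shortcuts lets it jump across the lattice, which is the small-world effect the abstract's experiments probe.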
In recent years there has been an explosion of published literature utilising Multi-Agent-Based Simulation (MABS) to study social, biological and artificial systems. This kind of work is evidenced within JASSS but is increasingly becoming part of mainstream practice across many disciplines. However, despite this plethora of interesting models, they are rarely compared, built on or transferred between researchers. There seems to be a dearth of "model-to-model" analysis. Rather, researchers tend to work in isolation, designing all their models from scratch and reporting their results without anyone else reproducing their findings. Although the opposite extreme, where all that seems to happen is the next twist on an existing model, is not to be wished for, there are considerable dangers if everybody works only on their own model. Part of the reason for this is that models tend to be very seductive, especially to the person who built the model. What is needed is a third party to check the results. However, it is not always clear how people other than the modeller can interpret or utilise such results, because it is very difficult to replicate simulation models from what is reported in papers. It was for these reasons that we called on the MABS community to submit papers for a model-to-model (M2M) workshop. The aim of the workshop was to gather researchers in MABS who were interested in understanding and furthering the transferability of knowledge between models. We received fourteen submissions, of which (after a process of peer review) eight were presented at the workshop. Of the six articles that comprise this special issue, five were presented at the workshop.
Agent-based models of political party competition in a multidimensional policy space have been developed in order to reflect adaptive learning by party leaders with very limited information feedback. The key assumption is that two categories of actors continually make decisions: voters choose which party to support, and party leaders offer citizens a certain policy package. After reviewing the arguments for using agent-based models, I elaborate two ways forward in the development of these models for political party competition. Firstly, theoretical progress is made in this article by taking the role of the mass media into account. Previous work implicitly assumes that all parties are equally visible to citizens, whereas I start from the more realistic assumption that there is also competition for attention in the public sphere. With this addition, it is possible to address the question of why new parties are seldom able to compete successfully with political actors already within the political system. Secondly, I argue that, if we really want to learn useful lessons from simulations, we should seek to empirically falsify models by confronting outcomes with real data. So far, most agent-based models of party competition have been an exclusively theoretical exercise. Therefore, I evaluate the empirical relevance of different simulations of Dutch party competition in the period from May 1998 until May 2002. Using independent data on party positions, I measure the extent to which simulations generate mean party sizes that resemble public opinion polls. The results demonstrate that it is feasible and realistic to simulate party competition in the Netherlands with agent-based models, even when a rather unstable period is investigated.
In the 1990s, Agent-Based Modeling (ABM) began gaining popularity; it represents a departure from the more classical simulation approaches. This departure, its recent development and its increasing application by non-traditional simulation disciplines indicate the need to continuously assess the current state of ABM and identify opportunities for improvement. To begin to satisfy this need, we surveyed and collected data from 279 articles from 92 unique publication outlets in which the authors had constructed and analyzed an agent-based model. From this large data set we establish the current practice of ABM in terms of year of publication, field of study, simulation software used, purpose of the simulation, acceptable validation criteria, validation techniques and complete description of the simulation. Based on the current practice we discuss six improvements needed to advance ABM as an analysis tool. These improvements include the development of ABM-specific tools that are independent of software, the development of ABM as an independent discipline with a common language that extends across domains, the establishment of expectations for ABM that match their intended purposes, the requirement of complete descriptions of the simulation so others can independently replicate the results, the requirement that all models be completely validated, and the development and application of statistical and non-statistical validation techniques specifically for ABM.
The Multi-Agent Based Simulation (MABS) workshop is a key event for researchers working in the field of agent-based simulation because it fosters encounters between, on the one hand, researchers working on applied social simulations who are interested in computer science aspects (architectures, platforms, methodologies), and, on the other, computer scientists willing to understand how agent-based models can be built from observation of the real world.
The article describes how and why we failed to replicate the main effects of a computational model that Michael Macy and Yoshimichi Sato published in the Proceedings of the National Academy of Sciences (May 2002). The model is meant to answer a fundamental question about social life: why, when and how is it possible to build trust with distant people? Based on their model, Macy and Sato warn US society about an imminent danger: a possible breakdown of trust caused by too much social mobility. But the computational evidence for exactly that result turned out not to be replicable.
Many policy problems in the field of urban planning and traffic management can be characterized as "ill-structured problems", i.e. there is little consensus about goals and facts. The policy-making process for such problems is a learning process, a continuous search for acceptable goals and relevant knowledge. In the past year we have been asked to help facilitate a policy-making process aimed at solving such an ill-structured problem: the congestion problems in and around middle-sized cities that also face substantial spatial planning challenges. In our workshop design we combined gaming techniques with a traffic simulation model (Paramics) and our spatial design tool Smartmap, which is supported by an interactive whiteboard (Smartboard). The simulation workshop was successfully tried out on a group of representatives who are usually involved in regional traffic and planning problems. Representatives of the national government as well as the Chamber of Commerce, environmental groups, local governments, transportation enterprises, employers and consumer organizations were present, each playing a different role. We used a fictional, non-existent region (Maasmere) and fictitious roles; yet the simulated problems in this region and the roles in the game are derived from real-life situations and thus recognizable to the participants. In short, Spelaanpak Route 26 (in English: Gaming Approach Route 26) uses computer simulation and design tools whose input is generated by the social interaction between group members. They have to negotiate the input for the simulation runs and for the spatial designs. The experiences from this try-out are being used to improve and sharpen our design. For example, we have learned that the planning focus (short term vs. long term) is a crucial bottleneck in solving the problems described above.
In the improved version of the simulation workshop, the short-term and long-term planning assignments have been given a more prominent place. It is our aim to apply our simulation workshop to a real situation: a Dutch region with actual congestion and planning problems whose solution requires pluriform participation from the policy network. In our opinion the simulation workshop can play a role in the problem-structuring phase of the policy process. The simulation approach will lead to identification of the main policy issues: what are the perceived problems, can we reach agreement on what our problem is, what are possible and acceptable policy options, and so on.
Our GeoGraph 3D extensions to the RePast agent-based simulation platform support models in which mobile agents travel and interact on rugged terrain or on network landscapes such as social networks of established organizational teams or spatial networks at any scale from rooms within buildings to urban neighborhoods to large geographic networks of cities. Interactive GeoGraph 3D visualizations allow researchers to zoom and pan within the simulation landscape as the model runs. Model-specific 3D representations of agents flock together on terrain landscapes, and teleport or travel along links on network landscapes. Agents may be displayed on network nodes either as individual agents or as dynamic 3D bar charts that reflect the composition of each node's population. Batch modes support scientific control via fully separated random number series, customized parameter combinations, and automatic data collection for many thousands of simulation runs. This paper introduces the GeoGraph 3D computational laboratory and briefly describes three representative GeoGraph models along with basic GeoGraph 3D capabilities and components.
This paper presents an Agent-Based LOcation Model (ABLOoM). ABLOoM simulates the location decisions of two main types of agents, namely households and firms. The model contains multiple interactions that are crucial to understanding land use change: interactions of agents with other agents, of agents with their environment, and of agents with emergent patterns. In order to understand the mechanisms underlying land use change and the formation of land use patterns, ABLOoM allows us to study human behaviour at the micro-level in a spatial context. The models, which combine economic theory, aspects of complexity theory and decision rules, show that it is possible to generate macro-level land use patterns from micro-level spatial decision rules.
Although the majority of researchers interested in ABM increasingly agree that the most natural way to program their models is to adopt OO practices, UML diagrams are still largely absent from their publications. In the last 15 years, the use of UML has risen constantly, to the point where UML has become the de facto standard for the graphical visualization of software designs. UML and its 13 diagrams have many universally accepted virtues. Most importantly, UML provides a level of abstraction higher than that offered by OO programming languages (Java, C++, Python, .Net ...). This abstraction layer encourages researchers to spend more time on modeling rather than on programming. This paper initially presents the four most common UML diagrams - class, sequence, state and activity diagrams (based on my personal experience, these are the most useful diagrams for ABM development). The most important features of these diagrams are discussed, and explanations, based on conceptual pieces often found in ABM models, are given of how best to use the diagrams. Subsequently, some very well-known, classical ABM models, such as the Schelling segregation model, the spatial evolutionary game, and a continuous double auction free market, are subjected to more detailed UML analysis.
We refine a prominent set of template models for agent-based modeling, and we offer new reference implementations. We also address some issues of design, flexibility, and ease of use that are relevant to the choice of an agent-based modeling platform.
In this work, simulation-based and analytical results on the emergence of steady states in traffic-like interactions are presented and discussed. The objective of the paper is twofold: i) investigating the role of social conventions in coordination problems, and more specifically in congestion games; ii) comparing simulation-based and analytical results to figure out what these methodologies can tell us about the subject matter. Our main claim is that Agent-Based Modelling (ABM) and Equation-Based Modelling (EBM) are not alternatives but, in some circumstances, complementary; we suggest some features distinguishing these two ways of modelling that go beyond the practical considerations provided by H.V.D. Parunak, Robert Savit and Rick L. Riolo. Our model is based on the interaction of strategies of heterogeneous agents who have to cross a junction. Each junction has only four approaches, each of which is passable only in the direction of the intersection and can be occupied by only one agent at a time. The results generated by ABM simulations provide structured data for developing the analytical model, through which the simulation results can be generalized and predictions made. ABM simulations are artifacts that generate empirical data on the basis of the variables, properties, local rules and critical factors the modeller decides to implement in the model; in this way simulations generate controlled data, useful for testing theories and reducing complexity, while EBM allows us to close the models, thus making it possible to falsify them.
Modelling urban land use change can foster understanding of underlying processes and is increasingly realized using agent-based models (ABM) as they allow for explicitly coding land management decisions. However, urban land use change is the result of interactions of a variety of individuals as well as organisations. Thus, simulation models on urban land use need to include a diversity of agent types which in turn leads to complex interactions and coding processes. This paper presents the new ABMland tool which can help in this process: It is software for developing agent-based models for urban land use change within a spatially explicit and joint environment. ABMland allows for implementing agent-based models and parallel model development while simplifying the coding process. Six major agent types are already included as coupled models: residents, planners, infrastructure providers, businesses, developers and lobbyists. Their interactions are pre-defined and ensure valid communication during the simulation. The software is implemented in Java building upon Repast Simphony and other libraries.
The issues of empirical calibration of parameter values and of the functional relationships describing the interactions between the various actors play an important role in agent-based modelling. Agent-based models range from purely theoretical exercises focussing on the patterns in the dynamics of interaction processes to modelling frameworks closely oriented toward the replication of empirical cases. ABMs are classified in terms of their generality and their use of empirical data. In the literature one finds the recommendation to maximize both criteria by building so-called 'abductive models'. This is almost the direct opposite of Milton Friedman's famous and provocative methodological credo 'the more significant a theory, the more unrealistic the assumptions'. Most methodologists and philosophers of science have harshly criticised Friedman's essay as inconsistent, wrong and misleading. By presenting arguments for a pragmatic reinterpretation of Friedman's essay, we show why most of the philosophical critique misses the point. We claim that good simulations have to rely on assumptions that are adequate for the purpose at hand, and those are not necessarily the descriptively accurate ones.
Is it possible to abstract a formal mechanism that originates schisms and governs the size evolution of social conversations? In this work a constructive solution to such a problem is proposed: an abstract model of a generic N-party turn-taking conversation. The model develops from simple yet realistic assumptions derived from experimental evidence, abstracts from conversation content and semantics while including topological information, and is driven by stochastic dynamics. We find that a single mechanism - namely the dynamics of a conversational party's individual fitness, as related to conversation size - controls the development of the self-organized schisming phenomenon. Potential generalizations of the model - including individual traits and preferences, memory effects and more elaborate conversational topologies - may find important applications in other fields of research where dynamically interacting, networked agents play a fundamental role.
We consider here issues of open access to social simulations, with a particular focus on software licences, though also briefly discussing documentation and archiving. Without any specific software licence, the default arrangements are stipulated by the Berne Convention (for those countries adopting it), and are unsuitable for software to be used as part of the scientific process (i.e. simulation software used to generate conclusions that are to be considered part of the scientific domain of discourse). Without stipulating any specific software licence, we suggest rights that should be provided by any candidate licence for social simulation software, and provide in an appendix an evaluation of some popularly used licences against these criteria.
This paper interprets a particular agent-based social simulation (ABSS) in terms of the third way of understanding agent-based simulation proposed by Conte. It is proposed that the normalized compression distance (derived from estimates of Kolmogorov complexity) between the initial and final macrolevel states of the ABSS provides a quantitative measure of the degree to which the results obtained via the ABSS might be obtained via a closed-form expression. If the final macrolevel state of an ABSS can only be obtained by simulation, this confers on agent-based social simulations a special status. Future empirical (computational) work and epistemological analyses are proposed.
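The normalized compression distance invoked above is a computable stand-in for the uncomputable Kolmogorov-complexity distance, and can be approximated with any off-the-shelf compressor. A minimal sketch, assuming zlib as the compressor (the paper does not necessarily use this one), applied to byte encodings of the initial and final macrolevel states:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length under a fixed compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

A value near 0 indicates that one state is almost algorithmically derivable from the other (suggesting a closed-form shortcut may exist), while a value near 1 indicates the final state shares little compressible structure with the initial one.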
We compare the individual-based 'threshold model' of innovation diffusion, in the version studied by Young (1998), with an aggregate model we derived from it. This model allows us to formalise and test hypotheses on the influence of individual characteristics upon global evolution. The classical threshold model supposes that an individual adopts a behaviour according to a trade-off between social pressure and personal interest. Our study considers only the case where all individuals have the same threshold. We present an aggregated model which takes into account variations in neighbourhood sizes, whereas previous work assumed this size to be fixed (Edwards et al. 2003a). The comparison between the aggregated models (the first assuming a fixed neighbourhood size and the second a variable one) shows an improved approximation over most of the parameter space. This demonstrates that the average degree of connectivity (first aggregated model) is not sufficient to characterise the evolution, and that variability in node degree has an impact on the diffusion dynamics. The remaining differences between the two models give us some clues about the specific ability of the individual-based model to maintain a minority behaviour, which can become a majority through an accumulation of stochastic effects.
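The uniform-threshold adoption rule described above can be written down in a few lines. This is a sketch under simplifying assumptions (synchronous updates, deterministic adoption once the threshold is met), whereas Young's version uses stochastic best response; the helper names are illustrative:

```python
def threshold_step(state, nbrs, theta):
    """One synchronous update: a non-adopter (0) adopts (1) when the
    fraction of adopting neighbours reaches the common threshold theta."""
    new = list(state)
    for i, ns in nbrs.items():
        if state[i] == 0 and ns:
            if sum(state[j] for j in ns) / len(ns) >= theta:
                new[i] = 1
    return new

def ring_neighbourhoods(n, k):
    """The k nearest nodes on each side of a ring, so every neighbourhood
    has the same size 2k (the fixed-size case); letting k vary per node
    gives the variable-size case the aggregated model accounts for."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)}
```

Iterating `threshold_step` from a small seed of adopters shows how the fraction of adopting neighbours, not just their count, drives the diffusion front.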
We propose an advanced agent-based modelling approach to ecosystem management, informed and motivated by consideration of the Fraser River watershed and its management problems. Agent-based modelling is introduced, and a three-stage computer-based research programme is formulated, the focus of which is on how best to intervene to cause stakeholders to co-operate effectively in ecosystem management, and on the objective discovery and comparison of intervention strategies by way of computer experimentation. The agent-based model outlined is technically relatively complex, and several potential difficulties in its detailed development are discussed. Types of ecosystem intervention strategy that might plausibly be discovered or recommended by the model are projected and compared with those currently advocated in the literature.
We investigate various strategies for stopping games embedded in the larger context of an artificial life simulation, where agents compete for food in order to survive and have offspring. In particular, we examine the utility of letting agents display their action tendencies (e.g., "continue to play" vs. "quitting the game" at any given point in the game), which agents can take into account when making their decisions. We set up a formal framework for analyzing these "embedded stopping games" and present results from several simulation studies with different kinds of agents. Our results indicate that while making use of action tendency cues is generally beneficial, there are situations in which agents using stochastic decision mechanisms perform better than agents whose decisions are completely determined by their own and their opponents' displayed tendencies, particularly when competing with agents who lie about their action tendencies.
This paper presents an agent-based simulation model of protest activity. Agents are located in a two dimensional grid and have limited ability to observe the behavior of other agents in the grid. The model is used to explore questions inspired by research on different theories of individual motivation and the so-called theory of critical mass. The simulations describe individuals who support an effort to change a policy, but acting in support of that effort is costly. When the marginal effect of participation reaches a certain level, people are more likely to get involved. With certain configurations of parameter values, the simulations produce no sustained widespread participation in protest regardless of the presence of activists; under other conditions high levels of protest are usually sustained, even without activists. However, the addition of a surprisingly small group of activists radically changes the aggregate behavior of the model under some conditions, making high and sustained protest possible when it otherwise would not have been.
While student populations in higher education are becoming more heterogeneous, several recent attempts have been made to introduce online peer support to decrease the tutoring load on teachers. We propose a system that facilitates synchronous online reciprocal peer support for ad hoc student questions: the Synchronous Allocated Peer Support (SAPS) system. Via this system, students with questions during their learning are allocated to competent fellow students for answering. The system is designed for reciprocal peer support activities among a group of students who are working on the same fixed modular material that every student has to finish, such as courses with separate chapters. As part of a requirement analysis for online reciprocal peer support to succeed, this chapter focuses on the second requirement: peer competence and the sustainability of our system. We therefore conducted a study simulating a SAPS-based allocation mechanism in the NetLogo simulation environment, focusing on the required minimum population size, the effect on the mechanism's effectiveness of adding extra allocation parameters or disabling others, and the spread of peer tutor load in various conditions and its influence on the mechanism's effectiveness. The simulation shows that our allocation mechanism should be able to facilitate online peer support activities among groups of students. The allocation mechanism holds over time, and a sufficient number of students are willing and competent to answer fellow students' questions. Also, fine-tuning the parameters of the allocation mechanism (e.g. extra selection criteria) further enhances its effectiveness.
An agent-based model of firms and their stakeholders' economic actions was used to test the theoretical feasibility of sustainable corporate social responsibility activities. Corporate social responsibility has become important to many firms, but CSR activities tend to get less attention during busts than during boom times. The hypothesis tested is that the CSR activities of a firm are more economically rational if the economic actions of its stakeholders reflect the firm's level of CSR. Our model focuses on three types of stakeholders: workers, consumers, and shareholders. First, we construct a uniform framework based on a microeconomic foundation that includes these stakeholders and the corresponding firms. Then, we formulate parameters for CSR in this framework. Our aim is to identify the conditions under which every type of stakeholder derives benefits from a firm's CSR activities. We simulated our model with heterogeneous agents by computer using several scenarios. For each one, the simulation was run 100 times with different random seeds. We first simulated the homogeneous version discussed above to verify the concept of our model. Next, we simulated the case in which workers had heterogeneous abilities, the firms incurred costs for CSR activities, and the workers, consumers, and shareholders had zero CSR awareness. We tested the robustness of our simulation results by using sensitivity analysis. Specifically, we investigated the conditions for the pecuniary advantage of CSR activities and the effects offsetting the benefits of CSR activities. Finally, we developed a new model incorporating bounded rationality and simulated it. The results show that the economic actions of stakeholders during boom periods greatly affect the sustainability of CSR activities during slow periods. This insight should lead to a feasible and effective prescription for sustainable CSR activities.
Would society be better off, in aggregate economic terms, if altruism were more widely practiced among its members? Here I try to answer this question using an agent-based computer simulation model of a simple agricultural society. A Monte Carlo exploration of the parameter landscape allowed the exploration of the range of possible situations of conflict between the individual and the group. The possible benefit of altruism for the aggregate wealth of society was assessed by comparing the overall efficiency of the system in accumulating aggregate utility in simulations with altruistic agents against equivalent systems where no altruistic acts were allowed. The results show that no simple situation could be found in which altruistic behavior was beneficial to the group. Dissipative and equitable altruistic behavior was either detrimental to the aggregate wealth of the group or neutral. However, the modeling of non-economic factors, or the inclusion of a synergistic effect in the mutualistic interactions, did increase the aggregate utility achieved by the virtual society.
The voting patterns in the Eurovision Song Contest have attracted attention from various researchers, spawning a small cross-disciplinary field of what might be called 'eurovisiopsephology' incorporating insights from politics, sociology and computer science. Although the outcome of the contest is decided using a simple electoral system, its single parameter - the number of countries casting a vote - varies from year to year. Analytical identification of statistically significant trends in voting patterns over a period of several years is therefore mathematically complex. Simulation provides a method for reconstructing the contest's history using Monte Carlo methods. Comparison of simulated histories with the actual history of the contest allows the identification of statistically significant changes in patterns of voting behaviour, without requiring a full mathematical solution. In particular, the period since the mid-90s has seen the emergence of large geographical voting blocs from previously small voting partnerships, which initially appeared in the early 90s. On at least two occasions, the outcome of the contest has been crucially affected by voting blocs. The structure of these blocs implies that a handful of centrally placed countries have a higher probability of being future winners.
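The Monte Carlo reconstruction described above amounts to re-running the contest under a null model in which votes carry no geographic bias, and asking how often the observed totals would arise by chance. A minimal sketch of such a null model (the score scale 12, 10, 8, ..., 1 is the real one, but the function and parameter names are illustrative, not the paper's):

```python
import random

def simulate_points(n_countries, n_years, target=0, rng=None):
    """Null model: each year, every country except the target awards the
    scores 12, 10, 8, ..., 1 to ten of the other countries chosen
    uniformly at random; returns the target's simulated yearly totals."""
    rng = rng or random.Random(0)
    scores = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1]
    totals = []
    for _ in range(n_years):
        total = 0
        for voter in range(n_countries):
            if voter == target:
                continue
            others = [c for c in range(n_countries) if c != voter]
            ranked = rng.sample(others, len(scores))  # random top ten
            if target in ranked:
                total += scores[ranked.index(target)]
        totals.append(total)
    return totals
```

Comparing a pair of countries' mutual scores in the real history against the distribution generated this way is one route to flagging statistically significant voting partnerships without a closed-form solution.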
One challenge for researchers dealing with traffic management is to find efficient ways to model and predict traffic flow. Due to the social nature of traffic, most decisions are not independent. Thus, in traffic systems the interdependence of actions leads to a high frequency of implicit coordination decisions. Although there are already systems designed to assist drivers in these tasks (broadcast, Internet, etc.), such systems do not consider, or even possess, a model of the way drivers decide. Our research goal is the study of commuting scenarios, drivers' decision-making, its influence on the system as a whole, and how simulation can be used to understand complex traffic systems. The present paper addresses two key issues: simulation of driver decision-making, and the role of a traffic forecast component. The former is realised by a naïve model of route choice adaptation, in which commuters' behaviour is based on heuristics they evolve. The latter is realised via a traffic control system that perceives drivers' decisions and returns a forecast, allowing drivers to decide on the actual route selection. For validation, we use empirical data from real experiments and show that the heuristics drivers evolve lead to a situation similar to that obtained in the real experiments. As for the forecast scenario, our results confirm that a traffic system in which a large share of drivers reacts to the forecast will not develop into equilibrium. However, a more stable situation arises by introducing some individual tolerance to sub-optimal forecasts.
Religious people talk about things that cannot be seen, stories that cannot be verified, and beings and forces beyond the ordinary. Perhaps their gods are truly at work, or perhaps human nature includes an impulse to proclaim religious knowledge. If so, that impulse would have to have arisen by natural selection, yet it is hard to imagine how natural selection could have produced it. Evolutionary scientists debate whether religion has any adaptive advantage at all (Bulbulia 2004a; Atran and Norenzayan 2004). Some believe that it has no adaptive value in itself and is just a hodgepodge of behaviors that evolved because they are adaptive in other, non-religious contexts. The agent-based simulation described in this article shows that a central unifying feature of religion, belief in an unverifiable world, could have evolved alongside verifiable knowledge. The simulation uses an agent-based communication model with two types of information: verifiable information (real information) about a real world and unverifiable information (unreal information) about an imaginary world. It examines the conditions necessary for the communication of unreal information to evolve alongside the communication of real information. It offers support for the theory that religion is an adaptive complex and disputes the theory that religion is a byproduct of unrelated adaptive processes.
Industrial Districts (IDs) are complex productive systems based on an evolutionary network of heterogeneous, functionally integrated and complementary firms operating within the same market and geographical space. Setting up a prototype able to reproduce an idealised ID, we model the cognitive processes underlying the behaviour of ID firms. ID firms are boundedly rational agents, able to process information coming from the technological and market environment and from their relational contexts. They are able to evaluate such information and to transform it into courses of action, routinising their choices, monitoring the environment, and categorising, typifying and comparing information. But they have bounded cognitive resources: attention, time and memory. We test two different settings: in the first, ID firms behave according to a self-centred attitude; in the second, they behave according to a socially centred attitude. We study how such a strong difference at the micro level can affect the technological adaptation of IDs at the macro level.
The study examines the interactions between socio-economic and natural dynamics in an island biosphere reserve using companion modelling. This approach provides scientific results and fosters interdisciplinarity. In the second phase of the study, we transferred knowledge by adapting the main research output, a role-playing game, for young people. Our goal was to introduce the interactions between social and ecological systems, coastal dynamics and integrated management. Adapting the game required close collaboration between scientists and educators in order to transform both its substance and its form and to run it on an ergonomic, easy-to-handle platform.
This article describes a social simulation model based on an economic experiment about altruistic behavior. The experiment by Fehr and Gächter showed that participants made frequent use of costly punishment in order to ensure continuing cooperation in a common pool resource game. The model reproduces not only the aggregated but also the individual data from the experiment. It was based on the data rather than on theory, an approach through which new insights about human behaviour and decision making may be found. The model was not designed as a stand-alone model, but as a starting point for a comprehensive Adaptive Toolbox Model. This may form a framework for modelling results from different economic experiments, comparing results and underlying assumptions, and exploring whether the insights thus gained also apply to more realistic situations.
Governments have come under increasing pressure to promote horizontal flows of information across agencies, but investment in cross-agency interoperable and standard systems has been minimal, since such investment seems to require government agencies to give up autonomy in managing their own systems, and its outcomes may be subject to many external and interaction risks. By producing an agent-based model using 'Blanche' software, this study provides policy-makers with a simulation-based demonstration illustrating how government agencies can autonomously and interactively build, standardize, and operate interoperable IT systems in a decentralized environment. The simulation designs an illustrative body of 20 federal agencies and their missions. A multiplicative production function is adopted to model the interdependent effects of heterogeneous systems on joint mission capabilities, and six social network drivers (similarity, reciprocity, centrality, mission priority, interdependencies, and transitivity) are assumed to jointly determine inter-agency system utilization. The exercise simulates five policy alternatives derived from the joint implementation of three policy levers (IT investment portfolio, standardization, and inter-agency operation). The simulation results show that modest investments in standard systems improve interoperability remarkably, but that a wide range of untargeted interoperability with lagging operational capabilities improves mission capability less remarkably. Nonetheless, exploratory modeling against varying parameters for technology, interdependency, and social capital demonstrates that the wide range of untargeted interoperability responds better to uncertain future states and hence reduces the variance of joint mission capabilities.
In sum, decentralized and adaptive investments in interoperable and standard systems can enhance joint mission capabilities substantially and robustly without requiring radical changes toward centralized IT management.
This paper describes the development of a series of intelligent agent simulations based on data from previously documented common pool resource (CPR) experiments. These simulations are employed to examine the effects of different institutional configurations and individual behavioral characteristics on group-level performance in a commons dilemma. Intelligent agents were created to represent the actions of individuals in a CPR experiment. The agents possess a collection of heuristics and utilize a form of adaptation by credit assignment, in which they select the heuristic that appears to yield the highest return under the current circumstances. These simulations allow the analyst to specify the precise initial configuration of an institution and an individual's behavioral characteristics, so as to observe the interaction of the two and the group-level outcomes that emerge as a result. Simulations explore settings in which there is no communication between agents, as well as the relative effects on overall group behavior of two different communication routines. The behavior of these simulations is compared with that of documented CPR experiments. Future directions for developing this technology for natural resource management modeling applications are outlined.
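The selection-by-credit-assignment mechanism described above can be sketched in a few lines. The following is an illustrative reconstruction, not the paper's implementation: the payoff values, noise level, and running-average update rule are all assumptions made for the example.

```python
import random

def choose_heuristic(values, epsilon, rng):
    """Pick the heuristic with the highest estimated return,
    exploring a random one with probability epsilon."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))
    return max(range(len(values)), key=lambda h: values[h])

def run(payoffs=(0.2, 0.8, 0.5), steps=2000, epsilon=0.1, seed=3):
    """Minimal sketch of adaptation by credit assignment, assuming a
    running-average value estimate per heuristic (payoffs and update
    rule are illustrative, not taken from the paper)."""
    rng = random.Random(seed)
    values = [0.0] * len(payoffs)   # estimated return per heuristic
    counts = [0] * len(payoffs)     # how often each heuristic was used
    for _ in range(steps):
        h = choose_heuristic(values, epsilon, rng)
        reward = payoffs[h] + rng.gauss(0, 0.05)  # noisy observed return
        counts[h] += 1
        values[h] += (reward - values[h]) / counts[h]  # running average
    return values, counts

values, counts = run()
# The heuristic with the highest underlying payoff ends up both
# best-valued and most frequently selected.
```

The point of the sketch is only that "select the heuristic that appears to yield the highest return" amounts to a simple reinforcement scheme over a fixed heuristic set; the paper's agents embed this in a full CPR appropriation game.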
This article has two primary objectives: (i) to replicate an agent-based model of social interaction by Bhavnani (2003), in which the author explicitly specifies mechanisms underpinning Robert Putnam's (1993) work on Civic Traditions in Modern Italy, bridging the gap between the study's historical starting point—political regimes that characterized 14th Century Italy—and contemporary levels of social capital—reflected in a 'civic' North and an 'un-civic' South; and (ii) to extend the original analysis, using a landscape of Italy that accounts for population density. The replication exercise is performed by different authors using an entirely distinct ABM toolkit (PS-I) with its own rule set governing agent interaction and cultural change. The extension, which more closely approximates a docking exercise, utilizes equal-area cartograms, otherwise known as density-equalizing maps (Gastner and Newman 2004), to resize the territory according to 1993 population estimates. Our results indicate that: (i) using the criterion of distributional equivalence, we experience mixed success in replicating the original model, given our inability to restrict the selection of partners to 'eligible' neighbors and to limit the number of agent interactions in a timestep; (ii) increasing the number of agents and introducing more realistic population distributions in our extension of the replication model increases distributional equivalence; (iii) using the weaker criterion of relational alignment, both the replication model and its extension capture the basic relationship between institutional effectiveness and civic change, the effect of open boundaries, historical shocks, and path dependence; and (iv) replication and docking may be usefully combined in model-to-model analysis, with an eye towards verification, reimplementation, and alignment.
This paper models a supply network as a complex adaptive system (CAS) in which firms, or agents, interact with one another and adapt. It applies agent-based social simulation (ABSS), a method of simulating social systems under the CAS paradigm, to observe emergent outcomes. The main purposes of this paper are to consider a social factor, trust, in modeling the agents' behavioral decision-making and, through simulation studies, to examine the intermediate self-organizing processes and the resulting macro-level system behaviors. The simulation results reveal symmetrical trust levels between two trading agents, from which the degree of trust in each pair of trading agents, as well as the resulting collaboration patterns in the entire supply network, emerges. It is also shown that agents' decision-making behavior based on the trust relationship can contribute to a reduction in the variability of inventory levels. This result can be explained by the fact that a mutual trust relationship based on past trading experience diminishes an agent's uncertainty about the trustworthiness of its trading partners and thereby tends to stabilize its inventory levels.
This is an interim report on a set of ongoing agent-based simulation experiments. The main goal of the experiments is to evaluate the impact of varying memory capacities on the ability of agents to adapt to given subsistence environments and resource landscapes. The results so far suggest that increasing the number of events which an agent can use in decision making does not directly lead to an increase in the agent’s ability to adapt to an environment. Variable memory capacity, however, does lead to diversification of agent behaviour patterns in a given environment. The results also suggest that more permissive environments allow agents with greater memory to show a greater diversity of behaviour than similar agents in more restrictive environments. These early results are providing the starting hypotheses which are at the core of a larger set of experiments presently being carried out. The simulation engine and source code are available from the author.
Long-duration historical studies have been formative in shaping comparative analysis. Yet historical processes are notoriously difficult to study, and findings about them are equally difficult to validate empirically. In this paper, I take Robert Putnam's work on Civic Traditions in Modern Italy and attempt to bridge the gap between the study's historical starting point and contemporary observations, using an agent-based model of social interaction. My use of a computational model to study historical processes—in this case the inculcation and spread of social capital—supports Putnam's claim of path dependence. Moving beyond Putnam's study, my results indicate that the formation of civic (or uncivic) communities is not deterministic, that their emergence is sensitive to historical shocks, and that the absence of political boundaries lowers aggregate levels of civicness in regions characterized by effective institutions. In addition, the simulation suggests that minor improvements to ineffective institutions—making them moderately effective—constitute a mid-level equilibrium trap with the least desirable social consequences.
This paper makes use of an adaptive agent framework to extend traditional models of comparative advantage in international trade, illustrating several cases which make theoretical room for industrial policy and the regulation of trade. Using an agent-based implementation of the Heckscher-Ohlin trade model, the paper confirms Samuelson's 2004 result demonstrating that the principle of comparative advantage does not ensure that technological progress in one country benefits its trading partners. It goes on to demonstrate that the presence of increasing returns leads to a situation with multiple equilibria, where free-market trading policies cannot be relied on to deliver an outcome which is efficient or equitable, with first movers in development enjoying a permanent advantage over later-developing nations. Finally, the paper examines the impact of relaxing the Ricardian assumption of capital immobility on the principle of comparative advantage. It finds that the dynamics of factor trade are radically different from the dynamics of trade in goods and that factor mobility converts a regime of comparative advantage into a regime of absolute advantage, thus obviating the reassuring equity results which stem from comparative advantage.
The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to conciliate the formal and empirical logic of program verification in computer science with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional perspective on the way one can understand the concepts of program and computation. We demonstrate that the logic of social simulation implies at least two distinct types of program verification, which reflect an epistemological distinction in the kind of knowledge one can have about programs. Computer programs seem to possess a causal capability (Fetzer, 1999) and an intentional capability that scientific theories seem not to possess. This distinction is associated with two types of program verification, which we call empirical and intentional verification. We demonstrate, by this means, that computational phenomena are also intentional phenomena, and that this is particularly manifest in agent-based social simulation. Ascertaining the credibility of results in social simulation requires a focus on the identification of a new category of knowledge we can have about computer programs. This knowledge should be considered an outcome of an experimental exercise, albeit not an empirical one, acquired within a context of limited consensus. The perspective of intentional computation seems to be the only one able to reflect the multiparadigmatic character of social science in terms of agent-based computational social science. We contribute, additionally, to the clarification of several questions found in the methodological perspectives of the discipline, such as the computational nature, the logic of program scalability, and the multiparadigmatic character of agent-based simulation in the social sciences.
When are we locked in to a path? This is one of the main questions concerning path dependency. Starting from Arthur's model of increasing returns and technology adoption (Arthur 1989), this paper addresses the question of when and how a lock-in occurs. To gain a better understanding of the path process, different modifications are made. First, the random selection of two types of adopters is replaced with a random selection of adopters having a Gaussian-distributed natural inclination. Second, since Arthur's model shows only indirect network effects, direct network effects are added to the model. Furthermore, it is shown that there is an asymptotic lock-in function referring to the ratio of technology A and B adopters; this ratio is calculated within the process on the basis of the probability of returning to an open state. The developed model is then used to simulate path processes without increasing returns, with increasing returns that stop when a lock-in occurs, and with random drop-outs of increasing returns. One conclusion that can be drawn from this extended model is that there is no lock-in without further stabilizing returns. This and other aspects are used to provide a simplified path model for empirical research. Finally, its limits are discussed with regard to uncertainty, innovation, and changes in network effect parameters.
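The Gaussian-inclination variant of Arthur's adoption process described above can be sketched as follows. This is a minimal illustration, not the paper's model: the payoff form, noise scale, and network-effect coefficient are assumptions made for the example.

```python
import random

def simulate(n_adopters=10_000, network_effect=0.01, seed=1):
    """Arthur-style sequential adoption with a Gaussian-distributed
    natural inclination (illustrative sketch, not the paper's exact
    specification). Each adopter draws an inclination toward
    technology A from N(0, 1) and adds an increasing-returns term
    proportional to the number of previous adopters."""
    rng = random.Random(seed)
    n_a = n_b = 0
    for _ in range(n_adopters):
        inclination = rng.gauss(0.0, 1.0)  # >0 favours A, <0 favours B
        payoff_a = inclination + network_effect * n_a
        payoff_b = -inclination + network_effect * n_b
        if payoff_a >= payoff_b:
            n_a += 1
        else:
            n_b += 1
    return n_a, n_b

n_a, n_b = simulate()
```

With increasing returns (network_effect > 0), one technology typically captures almost the whole market, which is the lock-in the paper studies; with network_effect = 0 the split stays close to 50/50, illustrating the paper's conclusion that there is no lock-in without stabilizing returns.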
We investigate in this paper the dynamics of the adoption of electronic commerce, focusing on agents' expertise and learning patterns in using e-markets. Traders face a double uncertainty. In the traditional channel, intrinsic frictions make the implementation of an exchange uncertain, but the quality of the traded items is perfectly observable. In the electronic channel, exchange always occurs, but agents imperfectly appreciate the quality of items, because traders and traded commodities are only inaccurately observable on e-markets. We develop a simulation model to deal with this issue, in which traders are heterogeneous regarding both their preferences and their ability to trade on the electronic market. We successively sketch two scenarios: i) we first analyse a situation where learning is strictly individual. A trial-and-error learning pattern may have an ambiguous impact: on the one hand, it is found to improve the diffusion of electronic markets; on the other hand, it may be responsible for a new source of inequalities, because some agents may not be able to trade in the electronic channel and are hence excluded from the use of this market either in the short run (temporary unemployment) or in the long run (structural unemployment). As learning is imperfect, the economy converges to a situation where the two markets coexist, inducing coordination costs (frictional unemployment). ii) We then extend our results by exploring the effects of community-based learning practices: such practices are found to enhance the adoption of the electronic channel, although inequalities among agents may increase.
When advising policy we face the fundamental problem that economic processes are uncertain. Consequently, policy can err. In this paper we show how the use of simulation models can reduce policy errors by inferring empirically reliable and meaningful statements about economic processes. We suggest that policy is best based on so-called abductive simulation models, which help to better understand how policy measures can influence economic processes. We show that abductive simulation models use a combination of theoretical and empirical analysis based on different data sets. By way of example we show what policy can learn with the help of abductive simulation models, namely how policy measures can influence the emergence of a regional cluster.
In this work we propose a new model for spatial games. We present a definition of mobility in terms of the satisfaction an agent has with its spatial location. Agents compete for space through a non-cooperative game by using mixed strategies. We are particularly interested in studying the relation between the Nash equilibrium and the winning strategy of a given model with mobility, and in how mobility can affect the results. The experiments show that mobility is an important variable in spatial games: changing parameters that affect mobility can lead to the success of strategies far from the Nash equilibrium.
Research on the resilience of complex systems has commonly been addressed from a structural point of view, relating the concept to the preservation of connectivity against the suppression of individual nodes or links. This perspective coherently encompasses the analysis of the resistance of networked infrastructures to structural damage (e.g. power grids, roads and communication networks) but not necessarily other scenarios (e.g. socio-ecological systems). Here we associate the resilience concept with the capability of a social organization to keep acceptable levels of functionality against external socio-economic disrupting factors that do not necessarily imply the destruction of existing links. As a particular case study, we show how diversity in organizational characteristics improves the resilience of regional innovation systems to uncertain socio-economic situations. Specifically, we deal with models where network structure is as important as the diversity of behaviours in agents' decisions. We reanalyze the conclusions of a classic text about regional development (Saxenian 1994) comparing the evolution of two industrial districts (Silicon Valley and Boston's Route 128), first by making a qualitative analogy in terms of resilience and, second, by building a simplified model of innovation systems that supports our argument quantitatively. The methodology presented in this paper, based on a simple network model designed from the qualitative conclusions of previous works about industrial networks, allows us to translate abstract theoretical evidence about networks into more specific scenarios, and can contribute fruitfully to this line of research.
By means of a simulated funding-agency/supported-firm stochastic dynamic game, this paper shows that the level of the subsidy provided by a (public) funding agency, normally used to correct for firms' R&D shortage, might be severely underprovided. This is due to the "externalities" generated by the agency-firm strategic relationship, as shown by comparing two versions of the model: one assuming "rival" behaviors between companies and agency (i.e., the current setting), and one associated with the "cooperative" strategy (i.e., the optimal Pareto-efficient benchmark). The paper also looks at the "welfare" implications associated with different degrees of persistency in the funding effect on corporate R&D. Three main conclusions are drawn: (i) the relative quota of the subsidy to R&D is undersized in the rival compared to the cooperative model; (ii) the rivalry strategy generates distortions that favor the agency compared to firms; (iii) when passing from a less persistent to a more persistent R&D additionality/crowding-out effect, the lower the distortion, the greater the variance, and vice versa. As for the management of R&D funding policies, we suggest that all the elements favouring greater alignment between agency and firm objectives may help current R&D support to approach its social optimum.
As agricultural and environmental issues become increasingly inter-linked, the growing multiplicity of stakeholders, with differing and often conflicting land use representations and strategies, underlines the need for innovative methods and tools to support the coordination, mediation and negotiation processes aiming at an improved, more decentralized and integrated natural resource management. But how can technology best fit such a novel means of support? Even current participatory modeling methods are not really designed to avoid a technocratic drift and to encourage the empowerment of stakeholders in the land use planning process. In fact, to truly integrate people and principals in the decision-making process of land use management and planning, information technology should not only support mere access to information but also help people to participate fully in its design, process and usage. This means allowing people to use the modeling support not to provide solutions, but to help them steer their course within an incremental, iterative, and shared decision-making process. To this end, since 1997 we have experimented at an operational level (2,500 km²) in the Senegal River valley with a Self-Design Method that places modeling tools at stakeholders' and principals' disposal right from the initial stages. The experiment presented here links Multi-Agent Systems and Role-Playing Games within a self-design and use process. The main objective was to test the direct design of these modeling tools by stakeholders, with as little prior design work by the modeler as possible. This "self-design" experiment was organized in the form of participatory workshops, which led to discussions, appraisals, and decisions about planning land use management that were already being applied two years after the first workshops.
Certain social preference models have been proposed to explain fairness behavior in experimental games. Existing research on evolutionary games, however, explains the evolution of fairness merely through self-interested agents. This paper analyzes the evolution of the ultimatum game on complex networks when a number of agents display social preferences. Agents' social preferences are modeled in three forms: fairness consideration, or maintaining a minimum acceptable money level; inequality aversion; and social welfare preference. Unlike other spatial ultimatum game models, the model in this study assumes that agents have incomplete information about other agents' strategies, so the agents need to learn and develop their own strategies in this unknown environment. A genetic algorithm-based learning classifier system is employed to address the agents' learning. Simulation results reveal that raising the minimum acceptable level, or including fairness consideration in the game, does not always promote the fairness level in ultimatum games on a complex network. If the minimum acceptable money level is high and not all agents possess a social preference, the fairness level attained may be considerably lower. The inequality aversion preference, however, has a negligible effect on the results of evolutionary ultimatum games on a complex network, while the social welfare preference promotes the fairness level. This paper demonstrates that agents' social preferences are an important factor in the spatial ultimatum game, and that different social preferences create different effects on the emergence of fairness.
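The core spatial ultimatum game with a minimum acceptable money level can be sketched as follows. This is an illustrative toy version, not the paper's model: it uses a ring network and imitate-the-best updating instead of the paper's complex networks and genetic algorithm-based learning classifier system.

```python
import random

def spatial_ultimatum(n=200, rounds=500, min_accept=0.1, seed=2):
    """Toy spatial ultimatum game (illustrative, not the paper's
    GA-based model): agents on a ring each hold an offer level p and
    an acceptance threshold q >= min_accept, play the ultimatum game
    with a neighbour, then copy the strategy of their best-scoring
    neighbour. Returns the mean offer after the last round."""
    rng = random.Random(seed)
    # strategy = (offer p, threshold q), both drawn from [min_accept, 1]
    strategies = [(rng.uniform(min_accept, 1), rng.uniform(min_accept, 1))
                  for _ in range(n)]
    for _ in range(rounds):
        payoff = [0.0] * n
        for i in range(n):
            j = (i + 1) % n          # i proposes to its right-hand neighbour
            p, _ = strategies[i]
            _, q = strategies[j]
            if p >= q:               # offer accepted: proposer keeps 1 - p
                payoff[i] += 1 - p
                payoff[j] += p
        new = []
        for i in range(n):
            nbrs = [(i - 1) % n, i, (i + 1) % n]
            best = max(nbrs, key=lambda k: payoff[k])
            new.append(strategies[best])
        strategies = new
    return sum(p for p, _ in strategies) / n

mean_offer = spatial_ultimatum()
```

Even in this stripped-down version, the minimum acceptable level acts as a floor on the offers that can survive, which is the parameter whose effect on the fairness level the paper investigates.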
The paper deals with the use of empirical data in social science agent-based models. Agent-based models are too often viewed just as highly abstract thought experiments conducted in artificial worlds, in which the purpose is to generate, rather than to test, theoretical hypotheses in an empirical way. On the contrary, they should be viewed as models that need to be grounded in empirical data to allow both the calibration and the validation of their findings. As a consequence, the search for strategies to find and extract data from reality, and to integrate agent-based models with other traditional empirical social science methods, such as qualitative, quantitative, experimental and participatory methods, becomes a fundamental step of the modelling process. The paper argues that the characteristics of the empirical target matter: depending on these characteristics, ABMs can be differentiated into case-based models, typifications and theoretical abstractions. These differences pose different challenges for empirical data gathering and imply the use of different validation strategies.
The spatial pattern described by von Thünen is considered an optimal solution for maximizing society's well-being in a hypothetical environment. We developed a model to test whether a collection of autonomous individuals, without any system-level optimization capabilities, can contribute to the formation of this optimal pattern. We also analyzed the mechanism that leads to emergent spatial optimization by applying theories of positive feedback and lock-in.
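The target pattern here is the classic von Thünen ring structure, which follows from the textbook bid-rent formula R = Y*(p - c) - Y*F*d (rent equals yield times net price minus yield times freight cost times distance). The sketch below illustrates that benchmark only; the crop parameters are invented for the example and are not the authors' calibration.

```python
def bid_rent(yield_per_km2, price, prod_cost, freight, distance):
    """Classic von Thuenen bid rent: R = Y * (p - c) - Y * F * d."""
    return yield_per_km2 * (price - prod_cost) - yield_per_km2 * freight * distance

def winning_crop(crops, distance):
    """Name of the crop offering the highest positive bid rent at the
    given distance from the market, or None beyond the margin."""
    best_name, best_rent = None, 0.0
    for name, (y, p, c, f) in crops.items():
        rent = bid_rent(y, p, c, f, distance)
        if rent > best_rent:
            best_name, best_rent = name, rent
    return best_name

# Illustrative parameters (assumptions, not the authors' model):
# gardening is valuable but costly to transport; grain is the reverse.
crops = {
    "gardening": (100, 2.0, 1.0, 0.10),  # Y, p, c, F
    "grain":     (50,  2.0, 1.0, 0.05),
}

rings = [winning_crop(crops, d) for d in range(25)]
# Land use forms concentric rings: gardening near the market,
# grain farther out, unused land beyond the margin.
```

The emergent question the paper asks is whether autonomous agents, each maximizing locally, reproduce this centrally computed ring pattern.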
Recent years have seen an increase in the application of ideas from the social sciences to computational systems. Nowhere has this been more pronounced than in the domain of multiagent systems. Because multiagent systems are composed of multiple individual agents interacting with each other, many parallels can be drawn to human and animal societies. One of the main challenges currently faced in multiagent systems research is that of social control. In particular, how can open multiagent systems be configured and organized given their constantly changing structure? One leading solution is to employ social norms. In human societies, social norms are essential to regulation, coordination, and cooperation. The current trend of thinking is that these same principles can be applied to agent societies, of which multiagent systems are one type. In this article, we provide an introduction to, and present a holistic viewpoint of, the state of normative computing (computational solutions that employ ideas based on social norms). To accomplish this, we (1) introduce social norms and their application to agent-based systems; (2) identify and describe a normative process abstracted from the existing research; and (3) discuss future directions for research in normative multiagent computing. The intent of this paper is to introduce new researchers to the ideas that underlie normative computing, survey the existing state of the art, and provide direction for future research.
Agent-based modelling has become an increasingly important tool for scholars studying social and social-ecological systems, but there are no community standards for describing, implementing, testing and teaching these tools. This paper reports on the establishment of the Open Agent-Based Modelling Consortium, www.openabm.org, a community effort to foster the development, communication, and dissemination of agent-based modelling for research, practice and education.