Figure 1 - uploaded by C. M. Macal

# Typical SIR model solution showing progression of population disease states for susceptible, infected, and recovered compartments. In this example, the entire population becomes infected and eventually recovers.

Source publication
Conference Paper
Full-text available
Agent-based simulation (ABS) is a recent modeling technique that is being widely used in modeling complex social systems. Forrester's System Dynamics (SD) is another longstanding technique for modeling social systems. Several classical models of systems, such as the Kermack-McKendrick model of epidemiology, the Lotka-Volterra equations for modeling...

## Context in source publication

Context 1
... is related to the number of contacts an individual has with other individuals and the likelihood that an infected individual transmits the infection to a susceptible individual upon contact; γ is the rate at which infected individuals recover from an infection, taken as 1/(mean duration of illness); and N is the population size, assumed to be constant in this basic representation. In the standard SIR model, the initial conditions for the population consist of one infected individual and no recovereds. The SIR model is also referred to as the homogeneous mixing model because of three implicit assumptions in the formulation: 1) the population is fully mixed, meaning that the individuals with whom a susceptible individual has contact are chosen at random from the whole population, 2) all individuals have approximately the same number of contacts in the same period of time, and 3) all contacts transmit the disease with the same probability. All infected individuals are assumed to transmit the disease to the same number of people, and all susceptible people have the same chance of becoming infected. A number called the basic reproduction number, R0 = β/γ (the expected number of secondary infections produced by one infected individual in a fully susceptible population), is often used to indicate the initial severity of an epidemic. Equation system (1) captures the number of susceptibles that become infected in the time interval Δt. Let ΔS be the number of susceptibles becoming infected in Δt. Then

ΔS = (number of susceptibles, S) × Pr[susceptible becomes infected in Δt],

where

Pr[susceptible becomes infected] = Pr[susceptible contacts an infected] × Pr[infection is transmitted from an infected to a susceptible upon contact],

Pr[susceptible contacts an infected] = (number of contacts per individual) × Pr[a contacted individual is infected],

Pr[a contacted individual is infected] = I / (number of individuals in the population, N).

Therefore,

ΔS = S × (number of contacts per individual) × (I / N) × Pr[infection is transmitted from an infected individual to a susceptible individual upon contact]
ΔS = (number of contacts per individual) × Pr[infection is transmitted from an infected individual to a susceptible individual upon contact] × S I / N.

Hence, as noted by Sterman (2000), β in (1) is a composite of two factors, the number of contacts per individual, c, and the probability that the infection is transmitted from an infected individual to a susceptible individual upon contact, i:

β = c × i. (2)

Whereas in (2) the composite of c and i appears in the standard SIR model (1), c and i are treated separately in the agent-based SIR model, Model 2, described below. Note that in this derivation the number of contacts per individual is assumed to be constant across disease states; that is, an infected individual has as many contacts with others as a susceptible individual does. For a constant population size, N = S + I + R. The output of a typical solution of the SIR model in (1) is shown in Figure 1. The three population states (numbers of susceptible, infected, and recovered individuals) are shown as they vary over time. The output shows an epidemic, as the entire population of agents becomes infected and the number of susceptible individuals declines to zero over the course of the simulation. Note the smooth nature of the curves, due to the deterministic nature of the model and the mean-field characterization of agent interactions. Key statistics of interest from such a simulation are the peak number of infected individuals and the time at which the peak occurs.
For this simulation run with a population size of 1000, the peak number of infected individuals, 593, occurs at time ...
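The difference-equation form of ΔS derived above maps directly to a few lines of code. This is an illustrative sketch, not the source's implementation; the parameter values (c = 6 contacts per step, i = 0.05 transmission probability, 10-step mean illness duration) are assumptions chosen only so that R0 = β/γ exceeds 1 and an epidemic occurs.

```python
# Minimal discrete-time SIR update following the Delta-S derivation above:
# beta = c * i (contacts per step x transmission probability), gamma = 1 / duration.
def simulate_sir(n=1000, c=6.0, i=0.05, duration=10.0, steps=200):
    beta, gamma = c * i, 1.0 / duration
    s, inf, r = n - 1.0, 1.0, 0.0   # one initial infected, no recovereds
    peak_inf, peak_t = inf, 0
    for t in range(1, steps + 1):
        new_infections = beta * s * inf / n   # Delta-S = c * i * S * I / N
        recoveries = gamma * inf              # gamma * I leave the infected state
        s -= new_infections
        inf += new_infections - recoveries
        r += recoveries
        if inf > peak_inf:                    # track the key output statistics
            peak_inf, peak_t = inf, t
    return peak_inf, peak_t

peak, t_peak = simulate_sir()
print(f"peak infected ~{peak:.0f} at step {t_peak}")
```

Because the update is deterministic, the curves it produces are smooth, matching the mean-field character of the SD solution described above.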

## Similar publications

Conference Paper
Full-text available
In immune system simulation there are two competing simulation approaches: System Dynamics Simulation (SDS) and Agent-Based Simulation (ABS). In the literature there is little guidance on how to choose the best approach for a specific immune problem. Our overall research aim is to develop a framework that helps researchers with this choice. In this...
Conference Paper
Full-text available
Some common systems modelling and simulation approaches for immune problems are Monte Carlo simulations, system dynamics, discrete-event simulation and agent-based simulation. These methods, however, are still not widely adopted in immunology research. In addition, to our knowledge, there is little research on the processes for the development of simu...
Conference Paper
Full-text available
The recently introduced concept of Shared Autonomous Vehicle (SAV) system, a taxi system without drivers or a short-term rental car-sharing program with autonomous vehicles, presents great potential to promote ridesharing travel behavior. Given the reliability and flexibility provided by the SAV system, some hurdles in the current ridesharing progr...
Conference Paper
Full-text available
Agent-based Modeling is playing a key role in an increasing number of approaches addressing modeling complex systems. Historically, such models were focused on describing the system modelled dynamics but not the interaction or visualization of the model itself. The new requirements for high-level realistic visualization and online analysis tools of...
Article
Full-text available
Simulation models are becoming increasingly popular in the analysis of important policy issues including global warming, population dynamics, energy systems, and urban planning. The usefulness of these models is predicated on their ability to link observable patterns of behaviour of a system to micro-level structures. This paper argues that structu...

## Citations

... Our analysis was predicated on a relatively small population (10,000 individuals) mainly due to the lack of computational resources to compute the optimal policy model and a large number of simulations [76]. Thus, extrapolating our results to larger population sizes can be tricky because we would need to capture information about many other dynamics that take place in larger populations. ...
Article
Mankind has struggled with pathogens throughout history. In this context, the contribution of vaccines to the continued economic and social prosperity of humanity is enormous, but it is constantly threatened by the development of vaccine-resistant strains of the pathogen. In this study, we investigate the usage of genomic sequencing tests to detect new strains of a pathogen in a multi-strain pandemic scenario using a mathematical-epidemiological-genomic-economic model. Our model provides a theoretical framework to explore the influence of an extensive number of pharmaceutical interventions in a dynamic multi-strain pandemic. Specifically, we show that while a genomic sequence testing policy can be both economically and epidemiologically efficient, a random sample of the population provides sub-optimal results. Moreover, we demonstrate that the optimal policy is sensitive to the social and economic settings of the population, and provide a machine learning based model that offers a solution to these challenges.
... ABM, in contrast, allows each individual to be described and the dynamic interactions among agents and environment to be explored, reflecting emergent behaviours (Nianogo & Arah, 2015). Furthermore, the literature abounds with examples of systems that can be modelled with either SD or ABM, taking advantage of the capabilities of each approach (Ahmed et al., 2012; Cimler et al., 2018; Macal, 2010). ...
Article
Full-text available
Digitisation has become an essential part of archival and library strategies to enhance access to collections. As digital content increases due to large-scale digitisation projects, it is expected that providing digital access to analogue collections will eventually reduce the number of archival records accessed in the reading room. In this paper, we investigate this issue using two approaches: system dynamics and agent-based modelling. We first analyse real data in order to identify the dynamic hypothesis of the model. Then, a sensitivity analysis is conducted on two baseline models to identify scenarios that match the real dataset. Although both approaches succeed in simulating the number of requests in the reading room, the experimental results show that a better fit is obtained with the agent-based model when not only the number of records that have been accessed and digitised is taken into account, but also the number of times such records were accessed before digitisation. The proposed model can be used to explore the impact of different digitisation strategies on the decrease in access requests in archival and library reading rooms.
... ABS, unlike SD and DES, is a context-dependent modeling technique; in contrast to the top-down approach used in SD, it models from the bottom up, so that the behavior of the whole system is determined by the interactions between the agents [73]. Although ABS, unlike SD, is stochastic, according to the Agency Theorem for System Dynamics, SD models are basically subsets of ABS models [80]. This implies that all SD models can be modeled as ABS models, but that comes at a greater cost, as ABS models are complex and time-consuming. ...
Article
Full-text available
At the current worrisome rate of global consumption, the linear economy model of producing goods, using them, and then disposing of them with no thought of the environmental, social, or economic consequences, is unsustainable and points to a deeply flawed manufacturing framework. Circular economy (CE) is presented as an alternative framework to address the management of emissions, scarcity of resources, and economic sustainability such that the resources are kept ‘in the loop’. In the context of manufacturing supply chains (SCs), the 6R’s of rethink, refuse, reduce, reuse, repair, and recycle have been proposed in line with the achievement of targeted net-zero emissions. In order to bring that about, the required changes in the framework for assessing the state of manufacturing SCs with regard to sustainability are indispensable. Verifiable and empirical model-based approaches such as modeling and simulation (M&S) techniques find pronounced use in realizing the ideal of CE. The simulation models find extensive use across various aspects of SCs, including analysis of the impacts, and support for optimal re-design and operation. Using the PRISMA framework to sift through published research, as gathered from SCOPUS, this review is based on 202 research papers spanning from 2015 to the present. It provides an overview of the simulation tools being put to use in the context of sustainability in manufacturing SCs, highlighting various aspects and contours of the collected research articles. This article focuses on the three major simulation techniques in the literature, namely, Discrete Event Simulation (DES), Agent-Based Simulation (ABS), and System Dynamics (SD). With regard to their application in manufacturing SCs, each modeling technique has its pros and cons, which are evident in data requirements, model magnification, model resolution, and environment interaction, among others.
These limitations are remedied through the use of hybrids, wherein two or more modeling techniques are combined for the desired results. The article also indicates various open-source software solutions that are being employed in research and industry. This article, in essence, has three objectives: first, to present to prospective researchers the current state of research, the concerns that have been raised in the field of sustainability modeling, and how they have been resolved; second, to serve as a comprehensive bibliography of peer-reviewed research published from 2015 to 2022; and finally, to indicate the limitations of the techniques with regard to sustainability assessment. The article also indicates the necessity of a new M&S framework and its prerequisites.
... The proposed model is solved using the agent-based simulation approach [46,47] such that each cell is treated as an agent that is described by a finite-state machine [48]. Namely, each cell, c, is described by its type (τ ∈ {E, Cu, Ci, Hu, Hi}) and its location, x, in the bladder's geometry. ...
Article
Full-text available
Bladder cancer is one of the most widespread types of cancer. Multiple treatments for non-invasive, superficial bladder cancer have been proposed over the last several decades, with a weekly Bacillus Calmette–Guérin immunotherapy-based protocol considered the gold standard today. Nonetheless, due to the complexity of the interactions between the immune system, healthy cells, and cancer cells in the bladder’s microenvironment, clinical outcomes vary significantly among patients. Mathematical models have been shown to be effective in predicting the treatment outcome based on the patient’s clinical condition at the beginning of the treatment. Even so, these models still have large errors for long-term treatments and for patients they do not fit well. In this work, we utilize modern mathematical tools and propose a novel cell-level spatio-temporal mathematical model that takes into consideration the cell–cell and cell–environment interactions occurring in a realistic bladder geometry in order to reduce these errors. We implement the model using the agent-based simulation approach, showing the impacts of different cancer tumor sizes and locations at the beginning of the treatment on the clinical outcomes for today’s gold-standard treatment protocol. In addition, we propose a genetic-algorithm-based approach to finding a successful and time-optimal treatment protocol for a given patient’s initial condition. Our results show that the current standard treatment protocol can be modified to produce cancer-free equilibrium for deeper cancer cells in the urothelium if the cancer cells’ spatial distribution is known, resulting in a greater success rate.
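The cell-as-agent formulation quoted in the citation above, a finite-state machine over the types τ ∈ {E, Cu, Ci, Hu, Hi} plus a location, can be sketched minimally as follows. The interpretation of the state names and the transition rule are illustrative assumptions, not the authors' actual model logic.

```python
from enum import Enum

# Cell types from the cited snippet; the readable names attached to each
# symbol are an assumption for illustration only.
class CellType(Enum):
    E = "empty"
    C_U = "cancer_uninfected"
    C_I = "cancer_infected"
    H_U = "healthy_uninfected"
    H_I = "healthy_infected"

class CellAgent:
    """A cell as a finite-state machine: its state is (type, location)."""

    def __init__(self, cell_type, location):
        self.type = cell_type
        self.location = location  # position x in the tissue geometry

    def step(self, infect_neighbors):
        # Hypothetical transition rule: an uninfected cell adjacent to
        # infection moves to the corresponding infected state.
        if infect_neighbors and self.type is CellType.C_U:
            self.type = CellType.C_I
        elif infect_neighbors and self.type is CellType.H_U:
            self.type = CellType.H_I

cell = CellAgent(CellType.C_U, (3, 7))
cell.step(infect_neighbors=True)
print(cell.type)
```

Each simulation tick would then call `step` on every cell with the state of its spatial neighborhood, which is what makes the formulation agent-based rather than compartmental.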
... Agent-based modeling (ABM) is used to examine the system at the individual level and investigate the movements of agents with distinct behavior and characteristics in different groups. Modeling agents individually allows for capturing the diversity among agents with different attributes and practices and investigating the dynamic behavior of the system [37]. An ABM model typically consists of three parts: ...
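In Macal and North's agent-based modeling tutorials, those parts are usually given as (1) the agents with their attributes and behaviors, (2) the agents' relationships and methods of interaction, and (3) the agents' environment. A minimal skeleton under that reading; all names, parameters, and rules here are illustrative, not taken from the cited model:

```python
import random

# Part 1: the agent, with its attributes and behavior.
class Agent:
    def __init__(self, uid):
        self.uid = uid
        self.infected = uid == 0  # seed a single infected agent

# Part 3: the environment, which holds the agents and schedules the steps.
class Environment:
    def __init__(self, n_agents, p_transmit=0.3, seed=1):
        self.rng = random.Random(seed)
        self.agents = [Agent(i) for i in range(n_agents)]
        self.p_transmit = p_transmit

    # Part 2: the relationship/interaction rule between two agents.
    def interact(self, a, b):
        if (a.infected or b.infected) and self.rng.random() < self.p_transmit:
            a.infected = b.infected = True

    def step(self):
        # Each tick, pair agents at random and let each pair interact.
        shuffled = self.agents[:]
        self.rng.shuffle(shuffled)
        for a, b in zip(shuffled[::2], shuffled[1::2]):
            self.interact(a, b)

env = Environment(n_agents=100)
for _ in range(50):
    env.step()
print(sum(a.infected for a in env.agents), "of 100 infected")
```

Unlike the deterministic SD formulation, the outcome here is stochastic and depends on the random pairing, which is exactly the individual-level diversity the snippet describes.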
... Although SD follows a top-down approach while ABM follows a bottom-up approach when modeling a system, both methods can be used equivalently to model a system. Epidemiological models that are formulated as systems of differential equations are examples of systems that can go through a transformation process to derive equivalent implementations in both approaches [37]. ...
... The use of the framework for the description of our model helped to improve the understanding of the simulation model and allowed for reproducibility and ease of scalability/reuse of our implementation. The conversion from the SD to the ABM model followed the guidelines provided by Macal [37], who provided a formal specification for SD and ABM models and used it to derive equivalent ABM models from SD ones. ...
Article
Full-text available
Hepatitis C virus (HCV) infection causes liver inflammation and affects over 170 million people around the world, with Egypt having the highest rate in the world. Unfortunately, serial liver biopsies, which can be invasive, expensive, risky, and inconvenient to patients, are typically used for the diagnosis of liver fibrosis progression. This study presents the development, validation, and evaluation of a prediction mathematical model for non-invasive diagnosis of liver fibrosis in chronic HCV. The proposed model in this article uses a set of nonlinear ordinary differential equations as its core and divides the population into six groups: Susceptible, Treatment, Responder, Non-Responder, Cured, and Fibrosis. The validation approach involved the implementation of two equivalent simulation models that examine the proposed process from different perspectives. A system dynamics model was developed to understand the nonlinear behavior of the diagnosis process over time. The system dynamics model was then transformed into an equivalent agent-based model to examine the system at the individual level. The numerical analysis and simulation results indicate that the earlier the HCV treatment is implemented, the larger the group of people who become responders, and the fewer people develop complications such as fibrosis.
... System dynamics makes it possible to understand and study all the interrelationships within systems, and to determine how these systems change over time. The agent-based method models the structure of a system as the result of decentralized decisions made by individual actors, or agents, over time (Macal, 2010). Consequently, the agents themselves form and change the state and structure of the system. ...
Article
The article discusses the evolution of two approaches to using computer simulations in economic research. The first was created by the American mathematician Norbert Wiener, who used the theory of cybernetics as a method of scientific research and cognition. The second, which also began its journey in the period 1944–1955, is based on the writings of John von Neumann, then a consultant to Los Alamos National Laboratory, and his invention of cellular automata. The first section provides an overview of the Wiener path, which became the guiding reference point for Forrester's system dynamics and Orcutt's microsimulations. The second section presents the Neumann path, which gave birth to such methods as Monte Carlo and agent-based modeling and which, through the efforts of J. Epstein and R. Axtell, was transformed into the conceptual approach of artificial life specifically designed for socio-economic experiments. Finally, the third section explores the potential for developing hybrid simulations by combining the methods of the two paths.
... They allow individual behavior and interactions to be modeled as the autonomous driving force of the simulation. This is in contrast to discrete-event models and system dynamics, with their bottom-up and top-down perspectives, respectively (Macal 2010). Multi-method simulation mixes these approaches when different types of granularity are required (Brailsford et al. 2010). ...
Article
Full-text available
Whenever a system needs to be operated by a central decision-making authority in the presence of two or more conflicting goals, methods from multi-criteria decision making can help to resolve the trade-offs between these goals. In this work, we devise an interactive simulation-based methodology for planning and deciding in complex dynamic systems subject to multiple objectives and parameter uncertainty. The methodology intermittently employs simulation models and global sensitivity analysis methods in order to facilitate the acquisition of system-related knowledge throughout the iterations. Moreover, the decision maker participates in the decision-making process by interactively adjusting control variables and system parameters according to a guiding analysis question posed for each iteration. As a result, the overall decision-making process is backed by sensitivity analysis results providing increased confidence in the reliability of considered decision alternatives. Using the efficiency concept of Pareto optimality and the sensitivity analysis method of Sobol’ sensitivity indices, the methodology is then instantiated in a case study on planning and deciding in an infectious disease epidemic situation similar to the 2020 coronavirus pandemic. Results show that the presented simulation-based methodology is capable of successfully addressing issues such as system dynamics, parameter uncertainty, and multi-criteria decision making. Hence, it represents a viable tool for supporting decision makers in situations characterized by time dynamics, uncertainty, and multiple objectives.
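First-order Sobol' indices of the kind used in the cited methodology measure how much of the output variance is explained by each input alone, S_i = Var(E[Y | X_i]) / Var(Y). A minimal estimator on a toy model; the model, sample size, and binning scheme below are illustrative assumptions, not the paper's setup:

```python
import random
import statistics

def model(x1, x2):
    # Toy model: the output depends strongly on x1 and weakly on x2.
    return 4.0 * x1 + 0.5 * x2

rng = random.Random(0)
xs = [(rng.random(), rng.random()) for _ in range(20000)]
ys = [model(a, b) for a, b in xs]
var_y = statistics.pvariance(ys)

def first_order_index(idx, bins=20):
    # S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning samples on X_i
    # and taking the variance of the per-bin conditional means.
    grouped = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        grouped[min(int(x[idx] * bins), bins - 1)].append(y)
    cond_means = [statistics.fmean(g) for g in grouped if g]
    return statistics.pvariance(cond_means) / var_y

print("S1 ~", round(first_order_index(0), 2),
      "S2 ~", round(first_order_index(1), 2))
```

For this toy model the index for x1 comes out near 1 and the index for x2 near 0, which is the kind of ranking of inputs that supports the decision maker in the cited methodology.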
... The final model follows Macal's agent-based formulation [44] of the well-known Susceptible-Infected-Recovered model [37], which imitates the spread of an infection. We extend this model by random movement on a social graph. ...
Preprint
Simulation-based optimization using agent-based models is typically carried out under the assumption that the gradient describing the sensitivity of the simulation output to the input cannot be evaluated directly. To still apply gradient-based optimization methods, which efficiently steer the optimization towards a local optimum, gradient estimation methods can be employed. However, many simulation runs are needed to obtain accurate estimates if the input dimension is large. Automatic differentiation (AD) is a family of techniques to compute gradients of general programs directly. Here, we explore the use of AD in the context of time-driven agent-based simulations. By substituting common discrete model elements such as conditional branching with smooth approximations, we obtain gradient information across discontinuities in the model logic. On the example of microscopic traffic models and an epidemics model, we study the fidelity and overhead of the differentiable models, as well as the convergence speed and solution quality achieved by gradient-based optimization compared to gradient-free methods. In traffic signal timing optimization problems with high input dimension, the gradient-based methods exhibit substantially superior performance. Finally, we demonstrate that the approach enables gradient-based training of neural network-controlled simulation entities embedded in the model logic.
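The core idea the preprint describes, substituting conditional branching with a smooth approximation so gradients exist across the discontinuity, can be shown in a few lines. The sigmoid blend, steepness constant, and branch values below are illustrative assumptions, not the authors' code.

```python
import math

def sigmoid(x, k=10.0):
    # Smooth step: tends to 0 or 1 as x moves away from zero; k sets steepness.
    return 1.0 / (1.0 + math.exp(-k * x))

def hard_branch(x, threshold=0.0):
    # Original discrete logic: not differentiable at the threshold.
    return 1.0 if x > threshold else 0.5

def smooth_branch(x, threshold=0.0):
    # Differentiable surrogate: blends the two branch values by a sigmoid
    # weight instead of switching discretely.
    w = sigmoid(x - threshold)
    return w * 1.0 + (1.0 - w) * 0.5

# Far from the threshold the surrogate matches the hard branch closely;
# near it, the blend provides a usable gradient for optimization.
print(smooth_branch(1.0), smooth_branch(-1.0))
```

Automatic differentiation applied to `smooth_branch` then yields nonzero sensitivities around the threshold, which is what enables the gradient-based optimization the preprint evaluates.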
... Concerning SD modelling, ABS and SD are very similar techniques, whereby ABS takes a bottom-up and SD a top-down approach. In fact, "the set of all SD models is a strict subset of the set of all ABS models" [52], which is also referred to as the Agency Theorem for System Dynamics [53]. However, due to the given system complexity and the number of involved entities as well as unknown parameters, the bottom-up approach of ABS was more likely to support the overall modelling process and was therefore preferred over SD for this study. ...
... (Law, 2009) This aims to ensure qualitative and quantitative (Eddy et al., 2012) consistency between the real world and a simulation. A hybrid RAM can strengthen the integration process between the resource and agent modelling approaches, and confidence building amongst modellers and users (Howick et al., 2008; Macal, 2010), by applying it as a joint conceptual modelling procedure. This ensures that the qualitative modelling stage is theoretically and methodologically consistent with a quantitative simulation modelling phase. ...
Article
Complex adaptive systems are systems where those managing the system, the agents, interact with other competing agents and key resources available to the system. The behaviour of the agents and the resources are constantly changing over time thus resulting in complex systems of evolving problem configurations. Managing such a system can be very challenging, particularly when attempting to manage rather than simplify complexity. One particular problem is the need to take a comprehensive perspective of the complex system in order to manage it effectively. Resource structure and agent behaviour are interdependent and both interconnected components need to be considered in order to support optimal decision making. Due to the lack of an appropriate technique in the literature to achieve a comprehensive qualitative appreciation of resource/agent complex adaptive system behaviour, this paper describes the development of a novel qualitative modelling tool, a Resource/Agent Map, that aims to map and analyse both resources and agents interactive behaviour. We show how this modelling tool can help achieve a holistic appreciation of the resource/agent perspectives and generate scenario alternatives to inform policy decision making in respect to system management and regulation. A pharmaceutical example is used to demonstrate the modelling tool.