Article

Epidemics and Rumours in Complex Networks

Authors:
Moez Draief and Laurent Massoulié

Abstract

Information propagation through peer-to-peer systems, online social systems, wireless mobile ad hoc networks and other modern structures can be modelled as an epidemic on a network of contacts. Understanding how epidemic processes interact with network topology allows us to predict the ultimate course of an outbreak, understand phase transitions and develop strategies to control and optimise dissemination. This book is a concise introduction for applied mathematicians and computer scientists to basic models, analytical tools and mathematical and algorithmic results. Mathematical tools introduced include coupling methods, Poisson approximation (the Stein-Chen method), concentration inequalities (Chernoff bounds and the Azuma-Hoeffding inequality) and branching processes. The authors examine the small-world phenomenon and preferential attachment, as well as classical epidemics. Each chapter ends with pointers to the wider literature. An ideal accompaniment for graduate courses, this book is also for researchers (statistical physicists, biologists, social scientists) who need an efficient guide to modern approaches to epidemic modelling on networks.
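As a minimal illustration of the kind of process the book treats, the sketch below simulates a discrete-time SIS (susceptible-infected-susceptible) epidemic on an Erdős–Rényi contact graph. The parameter values, the graph model and the function names are illustrative choices, not taken from the book.

```python
import random
import networkx as nx

def simulate_sis(G, beta=0.05, delta=0.1, steps=200, n_seeds=5, seed=0):
    """Discrete-time SIS epidemic: each infected node independently infects each
    susceptible neighbour with probability beta per step, and recovers with
    probability delta per step. Returns the number of infected nodes over time."""
    rng = random.Random(seed)
    infected = set(rng.sample(list(G.nodes), n_seeds))
    history = [len(infected)]
    for _ in range(steps):
        new_infected = set()
        for u in infected:
            if rng.random() >= delta:            # u stays infected
                new_infected.add(u)
            for v in G.neighbors(u):             # u tries to infect neighbours
                if v not in infected and rng.random() < beta:
                    new_infected.add(v)
        infected = new_infected
        history.append(len(infected))
    return history

if __name__ == "__main__":
    G = nx.erdos_renyi_graph(n=1000, p=0.01, seed=1)
    print(simulate_sis(G)[-10:])                 # infected counts, last 10 steps
```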


... By invoking the Cauchy–Schwarz inequality and considering that the matrix is symmetric, we obtain (see [30]) ...
... Proof. Following the proof of [30, Cor. 8.6] we have ...
... The result in Proposition 1 implies that the condition (9) is sufficient for fast extinction. This coincides with what is known for the SIS model in [30, Thm 8.2], where the bound is on the probability that at time t the process has not yet reached the absorbing state (the overall-healthy state). ...
Preprint
We study the spread of an SIRS-type epidemic with vaccination on a network. Starting from an exact Markov description of the model, we investigate the mean epidemic lifetime by providing a sufficient condition for fast extinction that depends on the model parameters and the topology of the network. We then consider a first-order mean-field approximation of the exact model and study its stability properties, relying on the graph-theoretical notion of equitable partition. In the case of graphs possessing this kind of partition, we prove that the endemic equilibrium can be computed by using a lower-dimensional dynamical system. Finally, in the special case of regular graphs, we investigate the domain of attraction of the endemic equilibrium.
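The excerpts above refer to a sufficient condition for fast extinction analogous to a classical SIS result (Ganesh, Massoulié and Towsley; the type of bound cited above as [30, Thm 8.2]): the epidemic dies out quickly when the effective spreading rate β/δ is below the reciprocal of the spectral radius of the adjacency matrix. The sketch below checks that spectral condition numerically; condition (9) of the cited paper also involves the vaccination parameters and is not reproduced here.

```python
import numpy as np
import networkx as nx

def fast_extinction_condition(G, beta, delta):
    """Check the classical spectral sufficient condition for fast extinction of
    an SIS epidemic on graph G: beta * lambda_max(A) < delta, where
    lambda_max(A) is the spectral radius of the adjacency matrix."""
    A = nx.to_numpy_array(G)
    lam_max = np.linalg.eigvalsh(A)[-1]          # A is symmetric
    return beta * lam_max < delta, lam_max

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(500, 3, seed=0)
    ok, lam = fast_extinction_condition(G, beta=0.02, delta=0.5)
    print(f"spectral radius = {lam:.2f}, fast-extinction condition holds: {ok}")
```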
... Mathematical models of epidemic spreading have been proposed and studied since the beginning of the industrial revolution, with one of the earliest attempts to model infectious disease transmission mathematically made by Daniel Bernoulli [17]. Over the last 200 years, many mathematical models of epidemics have been proposed and analyzed [11,29,101,105,134]. Compartment models: the basis of much mathematical modeling of epidemics is the well-known family of compartment models [71]. ...
... Their popularity stems from the fact that deterministic models can become fairly complex yet remain feasible to analyze, at least when numerical results are sufficient. In contrast, stochastic models, usually represented by Markov processes, need to be fairly simple to be mathematically manageable [11,29]. ...
... The proof of the convergence is underpinned by Kurtz's well-known theorem. One can refer to Section 5.3 of [29] for more details. ...
Article
Full-text available
This review presents various solved and open problems in developing, analyzing, and mitigating epidemic spreading processes under human decision-making. We provide a review of a range of epidemic models and explain the pros and cons of different epidemic models. We exhibit the art of coupling between epidemic models and decision models in the existing literature. More specifically, we provide answers to fundamental questions in human decision-making amid epidemics, including what interventions to take to combat the disease, who the decision-makers are, and when and how interventions should be taken. Among many decision models, game-theoretic models have become increasingly crucial in modeling human responses or behavior amid epidemics in the last decade. In this review, we motivate the game-theoretic approach to human decision-making amid epidemics. This review provides an overview of the existing literature by developing a multi-dimensional taxonomy, which categorizes existing literature based on multiple dimensions, including (1) types of games, such as differential games, stochastic games, evolutionary games, and static games; (2) types of interventions, such as social distancing, vaccination, quarantine, and taking antidotes; (3) the types of decision-makers, such as individuals, adversaries, and central authorities at different hierarchical levels. A fine-grained dynamic game framework is proposed to capture the essence of game-theoretic decision-making amid epidemics. We showcase three representative frameworks, drawn from a vast body of literature, with unique ways of integrating game-theoretic decision-making into the epidemic models. Each of the three frameworks has its own way of modeling and analysis and develops results from a different angle. In the end, we identify several main open problems and research gaps left to be addressed.
... The Central Limit Theorem (27) can be retrieved by using Theorem 3.4(i). Indeed, as we shall check later on (see the comment after the statement of Lemma 7.1), the conditions (11), (9) and (13) guarantee ...
... Lemma 7.2. Assume (12), (9) and (14). Then $\{S_t\}_{t>0}$ converges mod-$\phi_{\lambda,Z}$ on $D_{cp} := D_{cp}(a)$ with speed $O(t^{-\sigma})$ and limiting function (63). ...
... where the latter quantity is finite by (9). So (13) follows by (71), (14) and Minkowski's inequality. ...
Preprint
Poisson shot noise processes are natural generalizations of compound Poisson processes that have been widely applied in insurance, neuroscience, seismology, computer science and epidemiology. In this paper we study sharp deviations, fluctuations and the stable probability approximation of Poisson shot noise processes. Our achievements extend, improve and complement existing results in the literature. We apply the theoretical results to Poisson cluster point processes, including generalized linear Hawkes processes, and risk processes with delayed claims. Many examples are discussed in detail.
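For concreteness, a Poisson shot noise process is a sum of the form $S_t = \sum_{i: T_i \le t} H(t - T_i, Z_i)$ over the points $T_i$ of a Poisson process, with i.i.d. marks $Z_i$. The sketch below samples such a process; the exponential shot shape, the unit-mean marks and the rate are illustrative assumptions, not the setting of the paper.

```python
import numpy as np

def poisson_shot_noise(t, rate=2.0, shot=lambda s, z: z * np.exp(-s), rng=None):
    """Sample S_t = sum_{i: T_i <= t} shot(t - T_i, Z_i), where the T_i are the
    points of a Poisson process with the given rate and the Z_i are i.i.d.
    exponential marks with mean 1."""
    rng = rng or np.random.default_rng(0)
    n = rng.poisson(rate * t)                    # number of arrivals in [0, t]
    arrival_times = rng.uniform(0.0, t, size=n)  # given n, points are i.i.d. uniform
    marks = rng.exponential(1.0, size=n)
    return float(sum(shot(t - ti, zi) for ti, zi in zip(arrival_times, marks)))

if __name__ == "__main__":
    samples = [poisson_shot_noise(10.0, rng=np.random.default_rng(k)) for k in range(5000)]
    print(f"mean ~ {np.mean(samples):.3f}, std ~ {np.std(samples):.3f}")
```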
... Mathematical models of epidemic spreading have been proposed and studied since the beginning of the industrial revolution, with one of the earliest attempts to model infectious disease transmission mathematically made by Daniel Bernoulli [12]. Over the last 200 years, many mathematical models of epidemics have been proposed and analyzed [6,24,78,82,105]. ...
... Their popularity stems from the fact that deterministic models can become fairly complex yet remain feasible to analyze, at least when numerical results are sufficient. In contrast, stochastic models, usually represented by Markov processes, need to be fairly simple to be mathematically manageable [6,24]. ...
... It is easy to see that the stochastic population model (6) ... The proof of the convergence is underpinned by Kurtz's well-known theorem. One can refer to Section 5.3 of [24] for more details. ...
Preprint
Full-text available
This review presents various solved and open problems in developing, analyzing, and mitigating epidemic spreading processes under human decision-making. We provide a review of a range of epidemic models and explain the pros and cons of different epidemic models. We exhibit the art of coupling epidemic models and decision models in the existing literature. More specifically, we address fundamental questions in human decision-making amid epidemics, such as what interventions are taken to combat the disease, who the decision-makers are, when interventions are taken, and how interventions are modeled. Among many decision models, game-theoretic models have become increasingly crucial in modeling human responses/behavior amid epidemics in the last decade. In this review, we motivate the game-theoretic approach to human decision-making amid epidemics. This review provides an overview of the existing literature by developing a multi-dimensional taxonomy, which categorizes existing works based on multiple dimensions, including 1) types of games, such as differential games, stochastic games, evolutionary games, and static games; 2) types of interventions, such as social distancing, vaccination, quarantine, and taking antidotes; 3) the types of decision-makers, such as individuals, adversaries, and central authorities at different hierarchical levels. A fine-grained dynamic game framework is proposed to capture the essence of game-theoretic decision-making amid epidemics. From a vast body of works, we showcase three representative works with unique ways of integrating game-theoretic decision-making into the epidemic models. Each of these three works is distinguished from the others by its model, analytical approach, and results. In the end, we identify several main open problems and research gaps left to be addressed.
... Such a deterministic system is referred to as a fluid model in the literature. In particular, our proof is closely related to the argument for a result called Kurtz's theorem on density-dependent Markov processes (see Chapter 8 of [20] or Chapter 5.3 of [10]). However, we remark that there are two key differences between our proof and the standard argument. ...
... Step 1: Upper bound (16) using ideas similar to those used in proving Kurtz's theorem [10,20]. ...
Preprint
Full-text available
Cloud computing today is dominated by multi-server jobs. These are jobs that request multiple servers simultaneously and hold onto all of these servers for the duration of the job. Multi-server jobs add a lot of complexity to the traditional one-job-per-server model: an arrival might not "fit" into the available servers and might have to queue, blocking later arrivals and leaving servers idle. From a queueing perspective, almost nothing is understood about multi-server job queueing systems; even understanding the exact stability region is a very hard problem. In this paper, we investigate a multi-server job queueing model under scaling regimes where the number of servers in the system grows. Specifically, we consider a system with multiple classes of jobs, where jobs from different classes can request different numbers of servers and have different service time distributions, and jobs are served in first-come-first-served order. The multi-server job model opens up new scaling regimes where both the number of servers that a job needs and the system load scale with the total number of servers. Within these scaling regimes, we derive the first results on stability, queueing probability, and the transient analysis of the number of jobs in the system for each class. In particular we derive sufficient conditions for zero queueing. Our analysis introduces a novel way of extracting information from the Lyapunov drift, which can be applicable to a broader scope of problems in queueing systems.
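A minimal discrete-event sketch of the multi-server job model described above: jobs arrive as a Poisson process, each class requests a fixed number of servers, and service is first-come-first-served with head-of-line blocking. All parameter values, the class mix and the "fraction of time a job is waiting" statistic are illustrative, not the quantities analyzed in the paper.

```python
import heapq
import random

def simulate_fcfs_multiserver(n_servers, classes, arrival_rate, horizon, seed=0):
    """Simulate an FCFS multi-server-job queue.
    `classes` is a list of (probability, servers_needed, mean_service_time).
    Returns the fraction of time at least one job is waiting."""
    rng = random.Random(seed)
    t, free = 0.0, n_servers
    queue = []                         # FIFO list of (servers_needed, mean_service)
    departures = []                    # heap of (finish_time, servers_to_release)
    waiting_time = 0.0
    next_arrival = rng.expovariate(arrival_rate)

    def try_start(now):
        nonlocal free
        # FCFS with head-of-line blocking: only the queue head may start.
        while queue and queue[0][0] <= free:
            k, mean_s = queue.pop(0)
            free -= k
            heapq.heappush(departures, (now + rng.expovariate(1.0 / mean_s), k))

    while t < horizon:
        next_departure = departures[0][0] if departures else float("inf")
        t_next = min(next_arrival, next_departure)
        if queue:                      # a job is waiting during [t, t_next)
            waiting_time += min(t_next, horizon) - t
        t = t_next
        if t >= horizon:
            break
        if next_arrival <= next_departure:
            r, acc = rng.random(), 0.0
            for p, k, mean_s in classes:     # sample the arriving job's class
                acc += p
                if r <= acc:
                    queue.append((k, mean_s))
                    break
            next_arrival = t + rng.expovariate(arrival_rate)
        else:
            _, k = heapq.heappop(departures)
            free += k
        try_start(t)
    return waiting_time / horizon

if __name__ == "__main__":
    # Two job classes: 70% request 1 server, 30% request 4 servers.
    frac = simulate_fcfs_multiserver(20, [(0.7, 1, 1.0), (0.3, 4, 1.0)],
                                     arrival_rate=8.0, horizon=10_000.0)
    print(f"fraction of time with a waiting job: {frac:.3f}")
```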
... The World Health Organization (WHO) reported, on December 31, 2019, cases of pneumonia of undetected etiology in the city of Wuhan, Hubei Province, China. A novel coronavirus (CoViD-19) was identified as the source of the disease by the Chinese authorities on January 7, 2020. Eventually, the International Committee on Taxonomy of Viruses, on 11 February 2020, named the Severe Acute Respiratory Syndrome Coronavirus as SARS-CoV-2 [1]. ...
... Our study presents for the first time a new stochastic mathematical model for describing infectious dynamics and tracking virus temporal transmissibility in 3-dimensional space (earth). This model can be adjusted to describe all past outbreaks as well as CoViD-19. As a matter of fact, it introduces a novel approach to mathematical modelling of the infectious dynamics of any disease, and sets a starting point for conducting simulations, forecasting and nowcasting investigations based on real-world stereographic and spherical tracking on earth. ...
... This bimodal feature is caused by two likely scenarios: either the epidemic dies out quickly, infecting few individuals, or it becomes long-lasting and substantial. However, stochasticity in the form of random-walk transmission mechanisms related to spreading processes has never been widely explored in epidemiology [18][19][20]. For example, in computer science, some artificially created viruses propagate randomly through a plethora of online communication channels. ...
Article
A worldwide multi-scale interplay among a plethora of factors, ranging from micro-pathogens and individual or population interactions to macro-scale environmental, socio-economic and demographic conditions, entails the development of highly sophisticated mathematical models for robust representation of the contagious disease dynamics that would lead to the improvement of current outbreak control strategies and vaccination and prevention policies. Due to the complexity of the underlying interactions, both deterministic and stochastic epidemiological models are built upon incomplete information regarding the infectious network. Hence, rigorous mathematical epidemiology models can be utilized to combat epidemic outbreaks. We introduce a new spatiotemporal approach (SBDiEM) for modeling, forecasting and nowcasting infectious dynamics, particularly in light of recent efforts to establish a global surveillance network for combating pandemics with the use of artificial intelligence. This model can be adjusted to describe past outbreaks as well as COVID-19. Our novel methodology may have important implications for national health systems, international stakeholders and policy makers.
... In this model, nodes have an epidemic state that changes over time according to some function of the epidemic state of their respective neighbors. The most elementary epidemic state is represented by a single binary digit, and thus, every node is found in one of two possible states, frequently denoted by susceptible (S) and infected (I) [1][2][3]. ...
... Intuitively, the network structure plays a key role, as illustrated by the celebrated work of Pastor-Satorras and Vespignani, showing that the epidemic threshold vanishes on networks where the degree distribution is heavy enough [4]. Indeed, the role of network structure on specific epidemic models has been broadly investigated and different dichotomies have been identified (e.g., very long versus very short epidemic duration) [1,2,[5][6][7]. Moreover, recent efforts have focused on understanding the impact of dynamic network structure (i.e., edge set changes over time) [8][9][10]. ...
... where τ(t) is the coincidence time up to time t (the total time the two agents have spent together up to time t), and π is the invariant distribution for the random walk on G. Note that graphs with different degree distributions (e.g., a regular graph versus a power-law distribution) will have very different scalings for these quantities [2]. ...
Article
Full-text available
Network epidemics is a ubiquitous model that can represent different phenomena and finds applications in various domains. Among its various characteristics, a fundamental question concerns the time when an epidemic stops propagating. We investigate this characteristic on a SIS epidemic induced by agents that move according to independent continuous time random walks on a finite graph: agents can either be infected (I) or susceptible (S), and infection occurs when two agents with different epidemic states meet in a node. After a random recovery time, an infected agent returns to state S and can be infected again. The end of epidemic (EoE) denotes the first time where all agents are in state S, since after this moment no further infections can occur and the epidemic stops. For the case of two agents on edge-transitive graphs, we characterize EoE as a function of the network structure by relating the Laplace transform of EoE to the Laplace transform of the meeting time of two random walks. Interestingly, this analysis shows a separation between the effect of network structure and epidemic dynamics. We then study the asymptotic behavior of EoE (asymptotically in the size of the graph) under different parameter scalings, identifying regimes where EoE converges in distribution to a proper random variable or to infinity. We also highlight the impact of different graph structures on EoE, characterizing it under complete graphs, complete bipartite graphs, and rings.
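A crude simulation sketch of the two-agent case described above: both agents perform independent rate-1 continuous-time random walks, an infected agent recovers at rate gamma, and transmission is simplified to instantaneous infection whenever the two agents co-occupy a node. The ring graph, the rates and the function names are illustrative assumptions, not the paper's exact meeting/transmission mechanism.

```python
import random

def end_of_epidemic(adj, gamma=1.0, seed=0, t_max=1e6):
    """Two agents perform independent rate-1 continuous-time random walks on a
    graph given as an adjacency dict. Agent 0 starts infected; an infected agent
    recovers at rate gamma, and a susceptible agent is infected whenever it
    shares a node with an infected one. Returns the end-of-epidemic time."""
    rng = random.Random(seed)
    nodes = list(adj)
    pos = [rng.choice(nodes), rng.choice(nodes)]
    infected = [True, False]
    t = 0.0
    while any(infected) and t < t_max:
        if pos[0] == pos[1]:                        # co-occupancy => transmission
            infected = [True, True]
        rates = [1.0, 1.0] + [gamma if infected[i] else 0.0 for i in range(2)]
        total = sum(rates)
        t += rng.expovariate(total)                 # time to next event
        r = rng.uniform(0.0, total)                 # which event fired
        if r < rates[0]:
            pos[0] = rng.choice(adj[pos[0]])
        elif r < rates[0] + rates[1]:
            pos[1] = rng.choice(adj[pos[1]])
        elif r < rates[0] + rates[1] + rates[2]:
            infected[0] = False
        else:
            infected[1] = False
    return t

if __name__ == "__main__":
    n = 20                                          # ring with n nodes
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    times = [end_of_epidemic(ring, gamma=1.0, seed=s) for s in range(2000)]
    print(f"estimated E[EoE] on a {n}-ring: {sum(times)/len(times):.2f}")
```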
... In this model, nodes have an epidemic state that changes over time according to some function of the epidemic state of their respective neighbors. The most elementary epidemic state is represented by a single binary digit, and thus, every node is found in one of two possible states, frequently denoted by susceptible (S) and infected (I) [4,11,24]. ...
... Intuitively, the network structure plays a key role, as illustrated by the celebrated work of Pastor-Satorras and Vespignani, showing that the epidemic threshold vanishes on networks where the degree distribution is heavy enough [25]. Indeed, the role of network structure on specific epidemic models has been broadly investigated and different dichotomies have been identified (e.g., very long versus very short epidemic duration) [11,13,20,22,24]. Moreover, recent efforts have focused on understanding the impact of dynamic network structure (i.e., edge set changes over time) [19,28,29]. ...
... where τ(t) is the coincidence time up to time t (the total time the two agents have spent together up to time t), and π is the invariant distribution for the random walk on G. Note that graphs with different degree distributions (e.g., a regular graph versus a power-law distribution) will have very different scalings for these quantities [11]. ...
Preprint
Network epidemics is a ubiquitous model that can represent different phenomena and finds applications in various domains. Among its various characteristics, a fundamental question concerns the time when an epidemic stops propagating. We investigate this characteristic on a SIS epidemic induced by agents that move according to independent continuous time random walks on a finite graph: Agents can either be infected (I) or susceptible (S), and infection occurs when two agents with different epidemic states meet in a node. After a random recovery time, an infected agent returns to state S and can be infected again. The End of Epidemic (EoE) denotes the first time where all agents are in state S, since after this moment no further infections can occur and the epidemic stops. For the case of two agents on edge-transitive graphs, we characterize EoE as a function of the network structure by relating the Laplace transform of EoE to the Laplace transform of the meeting time of two random walks. Interestingly, this analysis shows a separation between the effect of network structure and epidemic dynamics. We then study the asymptotic behavior of EoE (asymptotically in the size of the graph) under different parameter scalings, identifying regimes where EoE converges in distribution to a proper random variable or to infinity. We also highlight the impact of different graph structures on EoE, characterizing it under complete graphs, complete bipartite graphs, and rings.
... A be an arbitrary subset of vertices of $\mathbb{Z}^2 \setminus \{0\}$, and consider $T := \inf\{t : Z_t \cap A \text{ is non-empty}\}$. Under assumptions (11), (12), ...
... • FPP with general IID weights on $\mathbb{Z}^d$ [15,7] • FPP on classical random graph (Erdős–Rényi or configuration) models [8,9] • and a much broader "epidemics and rumors on complex networks" literature [12,19]. ...
Preprint
A simple lemma bounds $\mathrm{s.d.}(T)/\mathbb{E} T$ for hitting times T in Markov chains with a certain strong monotonicity property. We show how this lemma may be applied to several increasing set-valued processes. Our main result concerns a model of first passage percolation on a finite graph, where the traversal times of edges are independent Exponentials with arbitrary rates. Consider the percolation time X between two arbitrary vertices. We prove that $\mathrm{s.d.}(X)/\mathbb{E} X$ is small if and only if $\Xi/\mathbb{E} X$ is small, where $\Xi$ is the maximal edge-traversal time in the percolation path attaining X.
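A small Monte Carlo sketch of the quantities in the abstract: edge traversal times are drawn as independent Exponentials, X is the resulting percolation (shortest-path) time between two fixed vertices, and Ξ is the maximal edge time on the optimal path. The grid graph, unit rates and sample size are illustrative assumptions.

```python
import random
import statistics
import networkx as nx

def percolation_time(G, src, dst, rng):
    """First-passage percolation: draw an independent Exponential(1) traversal
    time for each edge and return the shortest-path (percolation) time from src
    to dst, together with the maximal edge time on that path."""
    H = G.copy()
    for u, v in H.edges:
        H[u][v]["w"] = rng.expovariate(1.0)    # arbitrary rates could be used here
    path = nx.shortest_path(H, src, dst, weight="w")
    edge_times = [H[u][v]["w"] for u, v in zip(path, path[1:])]
    return sum(edge_times), max(edge_times)

if __name__ == "__main__":
    rng = random.Random(0)
    G = nx.grid_2d_graph(15, 15)
    xs, xis = [], []
    for _ in range(500):
        x, xi = percolation_time(G, (0, 0), (14, 14), rng)
        xs.append(x)
        xis.append(xi)
    print(f"s.d.(X)/E[X] ~ {statistics.stdev(xs)/statistics.mean(xs):.3f}")
    print(f"E[Xi]/E[X]   ~ {statistics.mean(xis)/statistics.mean(xs):.3f}")
```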
... However, the process shows a phase-transition behavior: indeed, there is a critical value τ_c of the effective spreading rate τ = β/δ, whereby if τ is less than τ_c, the initial infection dies out quickly. Conversely, for τ larger than τ_c, the infection spreading can last very long in any sufficiently large network [22,23,24]. The regime of persistent infection (τ > τ_c), called the metastable or quasi-stationary state, is reached rapidly given an initial set of infected nodes and can persist for a very long time [24]. ...
... In support of this, numerical simulations of SIS processes reveal that, even for fairly small networks (N ≃ 100) and when τ > τ_c, the overall-healthy state is only reached after an unrealistically long time. Hence, the indication of the model is that, in the case of real networks, one should expect that above the epidemic threshold the extinction of epidemics is hardly ever attained [22,23]. For this reason, the literature is mainly concerned with establishing the value of the epidemic threshold, being a key measure of the robustness against epidemic spreading. ...
Preprint
The design of an efficient curing policy, able to stem an epidemic process at an affordable cost, has to account for the structure of the population contact network supporting the contagious process. Thus, we tackle the problem of allocating recovery resources among the population, at the lowest cost possible to prevent the epidemic from persisting indefinitely in the network. Specifically, we analyze a susceptible-infected-susceptible epidemic process spreading over a weighted graph, by means of a first-order mean-field approximation. First, we describe the influence of the contact network on the dynamics of the epidemics among a heterogeneous population, that is possibly divided into communities. For the case of a community network, our investigation relies on the graph-theoretical notion of equitable partition; we show that the epidemic threshold, a key measure of the network robustness against epidemic spreading, can be determined using a lower-dimensional dynamical system. Exploiting the computation of the epidemic threshold, we determine a cost-optimal curing policy by solving a convex minimization problem, which possesses a reduced dimension in the case of a community network. Lastly, we consider a two-level optimal curing problem, for which an algorithm is designed with a polynomial time complexity in the network size.
... with D denoting a Poisson rv with parameter λ. A rich asymptotic theory has been developed for Erdős–Rényi graphs in the many-node regime; see the monographs [10], [18], [20], [26]. However, in many networks the data tells a different story: if the network comprises a large number n of nodes and $N_n(d)$ is the number of nodes with degree d in the network, then statistical analysis suggests a power-law behavior of the form $N_n(d) \approx n\,C d^{-\alpha}$ for some α in the range [2,3] (with occasional exceptions) and C > 0. See [20, Section 4.2] for an introductory discussion and references, and the paper by Clauset et al. [15] for a principled statistical framework. ...
... , n}. With c > 0: 1) Erdős–Rényi graphs G(n; p) (0 ≤ p ≤ 1) with scaling $p : \mathbb{N}_0 \to [0, 1]$ such that $p_n \sim c/n$ [10], [18], [22]; 2) geometric random graphs G(n; ρ) on a unit square (ρ > 0) with scaling $\rho : \mathbb{N}_0 \to \mathbb{R}_+$ such that $\pi \rho_n^2 \sim c/n$ [37]; and 3) random key graphs K(n; K, P) (K < P in $\mathbb{N}_0$) with scalings $K, P : \mathbb{N}_0 \to \mathbb{N}_0$ such that ...
Preprint
In random graph models, the degree distribution of an individual node should be distinguished from the (empirical) degree distribution of the graph, which records the fractions of nodes with given degree. We introduce a general framework to explore when these two degree distributions coincide asymptotically in large homogeneous random networks. The discussion is carried out under three basic statistical assumptions on the degree sequences: (i) a weak form of distributional homogeneity; (ii) the existence of an asymptotic (nodal) degree distribution; and (iii) a weak form of asymptotic uncorrelatedness. We show that this asymptotic equality may fail in homogeneous random networks for which (i) and (ii) hold but (iii) does not. The counterexample is found in the class of random threshold graphs. An implication of this finding is that random threshold graphs cannot be used as a substitute for the Barabási–Albert model for scale-free network modeling, as has been proposed by some authors. The results can also be formulated for non-homogeneous models by making use of a random sampling procedure over the nodes.
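A small numerical illustration of the distinction drawn in the abstract, using a sparse Erdős–Rényi graph (where the nodal degree distribution is asymptotically Poisson): the empirical fractions N_n(d)/n from one realization are compared with the Poisson probabilities. The parameters are illustrative.

```python
import networkx as nx
from collections import Counter
from math import exp, factorial

if __name__ == "__main__":
    n, c = 5000, 3.0                       # sparse Erdos-Renyi regime, p_n ~ c/n
    G = nx.fast_gnp_random_graph(n, c / n, seed=0)

    degrees = [d for _, d in G.degree()]
    empirical = Counter(degrees)           # N_n(d): number of nodes with degree d

    print(" d   empirical N_n(d)/n   Poisson(c) pmf")
    for d in range(8):
        poisson_pmf = exp(-c) * c**d / factorial(d)
        print(f"{d:2d}   {empirical.get(d, 0)/n:18.4f}   {poisson_pmf:14.4f}")
```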
... Network epidemics is a ubiquitous model to capture different diffusion processes on networks such as the spread of a disease in a population, viruses in computer networks, and fake news in online social networks [4,16,17,19,20]. Within this context, an important and general problem is determining the epidemic source: identifying the node or nodes that were first infected, that originated the epidemic, from a partial observation of the epidemic process [1,2,6,10,13,21,[24][25][26]. ...
... Figure 5 shows a comparison between the accuracy of the epicenter (given by f_1) and the distance center in identifying the epidemic source (computed on the exact same spanning trees generated for each run). Note that for all random graph models, the accuracy of the epicenter is significantly higher than that of the distance center. The relative improvement depends on the graph model and average degree: for ER the epicenter is around 26% more accurate than the distance center, but for BA this superiority is around 226% (3.26 times more accurate), for d = 6. ...
Article
Full-text available
Epidemic source detection is the problem of identifying the network node that originated an epidemic from a partial observation of the epidemic process. The problem finds applications in different contexts, such as detecting the origin of rumors in online social media, and has been studied under various assumptions. Different from prior studies, this work considers an epidemic process on a finite graph that starts on a random node (epidemic source) and terminates when all nodes are infected, yielding a rooted and directed epidemic tree that encodes node infections. Assuming knowledge of the underlying graph and the undirected spanning tree, can the epidemic source be accurately identified? This work tackles this problem by introducing the epicenter, an efficient estimator for the epidemic source. When the underlying graph is vertex-transitive the epicenter can be computed in linear time and it coincides with the well-known distance center of the epidemic tree. Moreover, on a complete graph the epicenter is also the most likely estimator for the source. Finally, the accuracy of the epicenter is evaluated numerically on five different graph models and the performance strongly depends on the graph structure, varying from 31% (complete graphs) to 13% (sparse power law graphs).
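A minimal sketch of the distance center mentioned in the abstract (the node minimizing the total graph distance to all others), which the epicenter coincides with on vertex-transitive graphs. The toy tree and the brute-force computation are illustrative, not the paper's estimator or experiments.

```python
import networkx as nx

def distance_center(T):
    """Return the node(s) of graph/tree T minimising the sum of shortest-path
    distances to all other nodes (the 'distance center')."""
    totals = {v: sum(nx.single_source_shortest_path_length(T, v).values()) for v in T}
    best = min(totals.values())
    return [v for v, s in totals.items() if s == best]

if __name__ == "__main__":
    # A toy 'epidemic tree': node 0 infected 1 and 2, node 1 infected 3 and 4, etc.
    T = nx.Graph([(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (4, 6)])
    print("distance center:", distance_center(T))
```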
... Vaccination coverage for epidemic spreading inhibition is studied for the SIR model in [6]. It is to be noted that for these classical models, the basic reproduction number (R_0), which is the expected number of secondary infections produced by a single infected person during the entire period of infectiousness in a susceptible population [7], is given by the ratio of the infection rate (β) to the curing rate (δ). It is basically a threshold parameter for these models, with R_0 < 1 signifying that the disease dies out gradually, while R_0 > 1 implies that the spreading will become epidemic. ...
... where ζ = diag{ζ_i} and Δ = diag{δ_i}, i ∈ N. Then, considering (6), (7), and (5), and letting p(t) = [(p_E(t))^T (p_I(t))^T (p_S(t))^T]^T, one obtains ...
Article
In this brief, dynamical aspects of a coupled epidemic-opinion model are studied: a disease-spreading network model, namely the Susceptible-Exposed-Infected-Vigilant (SEIV) compartmental model, is employed over a physical transmission network, while an opinion evolution model is employed over a social network. In particular, the opinion evolution captures how severe the disease is perceived to be by communities. Through rigorous analysis, it has been shown that if the opinion-dependent basic reproduction rate (R0) is less than unity, then the epidemic reaches the disease-free state, while if it is greater than unity, the disease-free state is unstable. Also, a sufficient condition for the endemic state of the disease, in which the disease persists in the network, is obtained. The theoretical findings are corroborated by simulation results.
... By invoking the Cauchy–Schwarz inequality and considering that the matrix is symmetric, we obtain (see [30]) ...
... The result in Proposition 1 implies that the condition (9) is sufficient for fast extinction. This coincides with what is known for the SIS model in [30, Thm 8.2], where the bound is on the probability that at time t the process has not yet reached the absorbing state (the overall-healthy state). ...
Article
Full-text available
We study the spread of an SIRS-type epidemic with vaccination on a network. Starting from an exact Markov description of the model, we investigate the mean epidemic lifetime by providing a sufficient condition for fast extinction that depends on the model parameters and the topology of the network. We then consider a first-order mean-field approximation of the exact model and study its stability properties, relying on the graph-theoretical notion of equitable partition. In the case of graphs possessing this kind of partition, we prove that the endemic equilibrium can be computed by using a lower-dimensional dynamical system. Finally, in the special case of regular graphs, we investigate the domain of attraction of the endemic equilibrium.
... The model (1) is computationally challenging for large-scale networks due to the exponentially increasing state space. Hence, we resort to a mean-field approximation of the Markov process [17], [21], [22]. Denote by x_i(t) ∈ [0, 1] the probability of agent i being infected at time t. ...
... Definition 2. A differential game with cost $\hat{J}_i = \int_0^T l_i(t, x, u)\,dt$ and dynamics defined by (2) is a potential differential game if there exists a function $\pi : X \times U_1 \times \cdots \times U_N \times [0, T] \to \mathbb{R}$ that satisfies the following condition for every ... If we can find c_i(t) for every i ∈ N such that relation (21) ...
Preprint
Increasing connectivity of communication networks enables large-scale distributed processing over networks and improves the efficiency of information exchange. However, malware and viruses can take advantage of the high connectivity to spread over the network and take control of devices and servers for illicit purposes. In this paper, we use an SIS epidemic model to capture the virus spreading process and develop a virus-resistant weight adaptation scheme to mitigate the spreading over the network. We propose a differential game framework to provide a theoretic underpinning for decentralized mitigation in which nodes of the network cannot fully coordinate, and each node determines its own control policy based on local interactions with neighboring nodes. We characterize and examine the structure of the Nash equilibrium, and discuss the inefficiency of the Nash equilibrium in terms of minimizing the total cost of the whole network. A mechanism design through a penalty scheme is proposed to reduce the inefficiency of the Nash equilibrium and allow the decentralized policy to achieve social welfare for the whole network. We corroborate our results using numerical experiments and show that virus resistance can be achieved by a distributed weight adaptation scheme.
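The excerpt before this entry refers to the first-order (N-intertwined) mean-field approximation of the SIS Markov process. A minimal sketch of that approximation, integrated with forward Euler, is below; the adjacency matrix, rates and step size are illustrative assumptions.

```python
import numpy as np

def mean_field_sis(A, beta, delta, x0, t_end=50.0, dt=0.01):
    """First-order mean-field (N-intertwined) SIS approximation on a weighted
    graph with adjacency matrix A:
        dx_i/dt = beta * (1 - x_i) * sum_j A_ij x_j - delta * x_i,
    integrated with a simple forward-Euler scheme."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (beta * (1.0 - x) * (A @ x) - delta * x)
        x = np.clip(x, 0.0, 1.0)       # keep probabilities in [0, 1]
    return x

if __name__ == "__main__":
    # Small weighted contact graph (symmetric); weights are illustrative.
    A = np.array([[0.0, 1.0, 0.5, 0.0],
                  [1.0, 0.0, 1.0, 0.2],
                  [0.5, 1.0, 0.0, 1.0],
                  [0.0, 0.2, 1.0, 0.0]])
    x_inf = mean_field_sis(A, beta=0.6, delta=0.8, x0=[0.1, 0.0, 0.0, 0.0])
    print("approximate endemic infection probabilities:", np.round(x_inf, 3))
```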
... More recently, the bootstrap percolation process has been investigated in the context of random graphs. This is partly motivated by the increasing interest in dynamical processes taking place over large-scale complex systems such as technological, biological and social networks, whose irregular structure is better captured by random graph models (see [20] for a comprehensive introduction to epidemics in complex networks). For example, in the case of social networks, bootstrap percolation may serve as a primitive model for the spread of ideas, rumors and trends among individuals. ...
... ). In the following, we ... (20). So by (36) and (116), $\log P(n - a_n - S_n(n - h(n)) \geq \lceil \varepsilon f_3(n)\rceil) = \log P(\mathrm{Bin}(n - a_n, 1 - \pi_n(n - h(n))) \geq \lceil \varepsilon f_3(n)\rceil)$ ...
Article
Full-text available
We consider the Erdős–Rényi random graph $G_{n,p}$ and we analyze the simple irreversible epidemic process on the graph, known in the literature as bootstrap percolation. We give a quantitative version of some results by Janson et al. (2012), providing a fine asymptotic analysis of the final size $A_n^*$ of active nodes, under a suitable super-critical regime. More specifically, we establish large deviation principles for the sequence of random variables $\{\frac{n - A_n^*}{f(n)}\}_{n\geq 1}$ with explicit rate functions and allowing the scaling function f to vary in the widest possible range.
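A minimal simulation sketch of bootstrap percolation on G(n,p) as described above: nodes activate irreversibly once at least r neighbours are active, and the final size of active nodes is reported. The parameters (n, p, seed-set size, r = 2) are illustrative choices in a super-critical regime, not those of the paper.

```python
import random
import networkx as nx

def bootstrap_percolation(G, initially_active, r=2):
    """Irreversible bootstrap percolation: an inactive node activates as soon as
    at least r of its neighbours are active. Returns the final active set."""
    active = set(initially_active)
    active_neighbours = {v: 0 for v in G}
    frontier = list(active)
    while frontier:
        u = frontier.pop()                 # each active node is processed once
        for v in G.neighbors(u):
            if v in active:
                continue
            active_neighbours[v] += 1
            if active_neighbours[v] >= r:
                active.add(v)
                frontier.append(v)
    return active

if __name__ == "__main__":
    rng = random.Random(0)
    n, p, a = 10_000, 0.001, 300           # graph size, edge prob., seed-set size
    G = nx.fast_gnp_random_graph(n, p, seed=1)
    A0 = rng.sample(range(n), a)
    A_star = bootstrap_percolation(G, A0, r=2)
    print(f"final size of active nodes: {len(A_star)} out of {n}")
```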
... In computation theory, mathematics, physics, complexity theory, theoretical biology and microstructure modeling, the model is known as a two-dimensional, two-state Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods and exponential waiting times [5]. Other related models appeared in epidemiology [6], [7], economics [8], engineering and computer sciences [9], [10]. Mathematically, all of these models fall in the general area of interacting particle systems, or contact processes, and exhibit phase transitions [11], [12]. ...
Preprint
We consider an agent-based model in which two types of agents interact locally over a graph and have a common intolerance threshold $\tau$ for changing their types, with exponentially distributed waiting times. The model is equivalent to an unperturbed Schelling model of self-organized segregation, an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, or a zero-temperature Ising model with Glauber dynamics, and has applications in the analysis of social and biological networks and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the formation of large segregated regions of agents of a single type from the known size $\epsilon > 0$ to size $\approx 0.134$. Namely, we show that for $0.433 < \tau < 1/2$ (and by symmetry $1/2 < \tau < 0.567$), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to large segregated regions to size $\approx 0.312$ by considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type to the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for $0.344 < \tau \leq 0.433$ (and by symmetry for $0.567 \leq \tau < 0.656$) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p = 1/2 and the range of tolerance considered.
... The relevance of the effective resistance in graph theory stems from the fact that it provides a distance on a graph [26] that quantifies the connectivity between any two vertices, not simply the length of the shortest path. In problems related to diffusion on a graph, or the propagation of infections or gossip [16,25,37], the redundancy of paths affects the dynamics of the corresponding processes. Formally, the effective resistance provides the correct notion of distance for a random walk on a graph, also known as the commute time. ...
Preprint
To quantify the fundamental evolution of time-varying networks, and detect abnormal behavior, one needs a notion of temporal difference that captures significant organizational changes between two successive instants. In this work, we propose a family of distances that can be tuned to quantify structural changes occurring on a graph at different scales: from the local scale formed by the neighbors of each vertex, to the largest scale that quantifies the connections between clusters, or communities. Our approach results in the definition of a true distance, and not merely a notion of similarity. We propose fast (linear in the number of edges) randomized algorithms that can quickly compute an approximation to the graph metric. The third contribution involves a fast algorithm to increase the robustness of a network by optimally decreasing the Kirchhoff index. Finally, we conduct several experiments on synthetic graphs and real networks, and we demonstrate that we can detect configurational changes that are directly related to the hidden variables governing the evolution of dynamic networks.
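The excerpt before this entry mentions the effective resistance as a graph distance. A minimal sketch of its standard computation from the Moore-Penrose pseudoinverse of the Laplacian is below; this is the textbook formula, not the randomized approximation algorithms the paper proposes.

```python
import numpy as np
import networkx as nx

def effective_resistance(G, u, v):
    """Effective resistance between nodes u and v, computed from the
    Moore-Penrose pseudoinverse L+ of the graph Laplacian:
        R_eff(u, v) = L+_{uu} + L+_{vv} - 2 L+_{uv}."""
    nodes = list(G)
    idx = {x: i for i, x in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)
    i, j = idx[u], idx[v]
    return L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j]

if __name__ == "__main__":
    G = nx.cycle_graph(6)
    # On an n-cycle, R_eff between nodes at hop distance k is k(n - k)/n.
    print(f"R_eff(0, 3) on a 6-cycle: {effective_resistance(G, 0, 3):.3f} (exact: 1.5)")
```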
... Three approaches are commonly adopted in the analysis of propagation models. The first is to directly characterize the stochastic process adopting tools from branching processes, percolation, and random graph theory; e.g., see [7]. The second approach proposes a degree-based model based on the approximation that nodes with the same degree exhibit similar behavior; e.g., see [17,Chapter 17]. ...
Preprint
In this work we review a class of deterministic nonlinear models for the propagation of infectious diseases over contact networks with strongly-connected topologies. We consider network models for susceptible-infected (SI), susceptible-infected-susceptible (SIS), and susceptible-infected-recovered (SIR) settings. In each setting, we provide a comprehensive nonlinear analysis of equilibria, stability properties, convergence, monotonicity, positivity, and threshold conditions. For the network SI setting, specific contributions include establishing its equilibria, stability, and positivity properties. For the network SIS setting, we review a well-known deterministic model, provide novel results on the computation and characterization of the endemic state (when the system is above the epidemic threshold), and present alternative proofs for some of its properties. Finally, for the network SIR setting, we propose novel results for transient behavior, threshold conditions, stability properties, and asymptotic convergence. These results are analogous to those well-known for the scalar case. In addition, we provide a novel iterative algorithm to compute the asymptotic state of the network SIR system.
... Therefore, the diameter of the heavy core is at most the diameter of an Erdős–Rényi random graph $G(\bar{n}, p)$, with $p = \bar{n}^{-1+\omega(1/\log\log n)}$. However, with probability $1 - \bar{n}^{-\omega(1)}$ this diameter is $O(\log \bar{n}/\log(p\bar{n})) = o(\log\log n)$ [23], and in particular the graph is connected in this case. Since $\bar{n} = n^{\Theta(1)}$, this proves the lemma. ...
Article
Full-text available
In Chung–Lu random graphs, a classic model for real-world networks, each vertex is equipped with a weight drawn from a power-law distribution, and two vertices form an edge independently with probability proportional to the product of their weights. Chung–Lu graphs have average distance $O(\log\log n)$ and thus reproduce the small-world phenomenon, a key property of real-world networks. Modern, more realistic variants of this model also equip each vertex with a random position in a specific underlying geometry. The edge probability of two vertices then depends, say, inversely polynomially on their distance. In this paper we study a generic augmented version of Chung–Lu random graphs. We analyze a model where the edge probability of two vertices can depend arbitrarily on their positions, as long as the marginal probability of forming an edge (for two vertices with fixed weights, one fixed position, and one random position) is as in a Chung–Lu random graph. The resulting class contains Chung–Lu random graphs, hyperbolic random graphs, and geometric inhomogeneous random graphs as special cases. Our main result is that every random graph model in this general class has the same average distance as a Chung–Lu random graph, up to a factor of 1+o(1). This shows in particular that specific choices, such as taking the underlying geometry to be Euclidean, do not significantly influence the average distance. The proof also shows that every random graph model in our class has a giant component and polylogarithmic diameter with high probability.
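A minimal sketch of the plain (non-geometric) Chung–Lu model described above: vertices carry weights and each pair is joined independently with probability proportional to the product of the weights. The heavy-tailed weight distribution and graph size are illustrative assumptions.

```python
import random

def chung_lu_graph(weights, seed=0):
    """Chung-Lu random graph: vertices i and j (i != j) are joined independently
    with probability min(1, w_i * w_j / sum_k w_k)."""
    rng = random.Random(seed)
    n, total = len(weights), sum(weights)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / total):
                edges.append((i, j))
    return edges

if __name__ == "__main__":
    rng = random.Random(1)
    n = 2000
    # Heavy-tailed weights (Pareto, P(W > x) = x^{-1.5}), giving a power-law degree sequence.
    weights = [rng.paretovariate(1.5) for _ in range(n)]
    edges = chung_lu_graph(weights, seed=2)
    print(f"{len(edges)} edges, average degree ~ {2 * len(edges) / n:.2f}")
```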
... Let $h : x \mapsto \sup_{\theta \geq 0}(\theta x - \kappa_R(\theta))$ be the Legendre transform of $\kappa_R$, so the above considerations show that $h(1) \geq \theta^*/2 > 0$ (note that this does not depend on n). By a Chernoff bound for subcritical Galton–Watson processes (see for example [27, Lemma 1.9]), for each i we have ...
Preprint
There are a number of well-known problems and conjectures about partitioning graphs to satisfy local constraints. For example, the majority colouring conjecture of Kreutzer, Oum, Seymour, van der Zypen and Wood states that every directed graph has a 3-colouring such that for every vertex v, at most half of the out-neighbours of v have the same colour as v. As another example, the internal partition conjecture, due to DeVos and to Ban and Linial, states that for every d, all but finitely many d-regular graphs have a partition into two nonempty parts such that for every vertex v, at least half of the neighbours of v lie in the same part as v. We prove several results in this spirit: in particular, two of our results are that the majority colouring conjecture holds for Erdős–Rényi random directed graphs (of any density), and that the internal partition conjecture holds if we permit a tiny number of "exceptional vertices". Our proofs involve a variety of techniques, including several different methods to analyse random recolouring processes. One highlight is a "personality-changing" scheme: we "forget" certain information based on the state of a Markov chain, giving us more independence to work with.
... Our work is also related to the vast literature on SIR epidemics on random networks, to name just a few (Janson et al., 2014; Ball & Sirl, 2016; Stegehuis et al., 2016; Janson et al., 2014; Pastor-Satorras et al., 2015; Ball et al., 2009; Kiss et al., 2017; Ball et al., 2014; Britton et al., 2007; Draief & Massoulié, 2010). ...
Article
Full-text available
We consider a heterogeneous Susceptible–Infected–Recovered epidemic model, calibrated to the COVID-19 pandemic characteristics. We study the equilibrium of a voluntary social distancing game on a network of individuals subject to epidemic risk. We quantify the absolute and relative utility gaps across age cohorts. We further introduce life insurance in the model, which serves to mitigate the loss in the severe individual outcomes. We find that in most cases, insurance decreases the risk of contagion in the network because more individuals can be incentivized to self-isolate. On the other hand, in the case when the insurer does not have sufficient information on the self-isolation strategy of the individual, insurance can introduce moral hazard. We find that when premiums cannot be sufficiently differentiated, individuals may choose not to socially distance. This illustrates the importance of disclosing self-isolation strategies to the insurer, or inferring these strategies for the various types. Partial monitoring of social distancing strategy can mitigate the two problems of moral hazard and adverse selection.
... be dictated by existing dynamical epidemiological models (Draief and Massoulié, 2009), (Bailey et al., 1975). In this regard, Bernoulli developed the first known mathematical epidemiological model in the eighteenth century that captures the spreading of smallpox (Bernoulli, 1760). ...
Article
Full-text available
The objective of this study is to predict COVID-19 trajectories in terms of the infected population of Indian states. In this work, a state interaction network of the sixteen Indian states with the highest infected caseload is considered, based on a networked Susceptible-Exposed-Infected-Recovered (SEIR) epidemic model. An intervention term has been introduced in order to capture the effect of lockdowns with different stringencies at different periods of time. The model has been fitted using the least absolute shrinkage and selection operator (LASSO). Machine learning methods have been used to train the parameters of the model, cross-validate the data, and predict the parameters. The predictions of the infected population for each of the sixteen states are shown using data from January 1, 2021 until the writing of this manuscript on June 25, 2021. Finally, the effectiveness of the model is demonstrated by the calculated mean error and confidence interval.
... The transmission dynamics of almost all viral diseases can be modelled by existing dynamical epidemiological models [4], [5]. In this regard, Bernoulli developed the first known epidemiological model in the eighteenth century to capture the transmission dynamics of smallpox [6]. ...
Conference Paper
Full-text available
In this paper, a novel deterministic discrete-time networked SUVIRD (Susceptible-Undetected positive-Vaccinated-Infected-Recovered-Deceased) model has been proposed to capture the spreading dynamics of COVID-19 in the fifteen most affected states of India. Using the proposed model, an investigation has been carried out to analyse the role of lockdown as well as vaccination in minimizing the spread of the disease during the second wave in India. In this connection, to capture the stringency of lockdown, a time-varying intervention term (η[t]) has been introduced and a distance-based adjacency matrix has been constructed to describe the interconnection between states. The basic reproduction number (R0) has been computed for each state using the estimated rate parameters. Eventually, an analogy has been drawn between the original cumulative caseload and the simulated cumulative infection using the estimated parameters.
... Put simply, even when people are in full recognition of the impact on the health system and health outcomes of having a large outbreak, their decisions will lead to worse utility than in the social optimum. Related literature: our work is part of the vast literature on SIR epidemics on random networks, to name just a few [8-14,17,18,20,31,33,38,40,44]. We continue in the same line as [31], who study the SIR epidemic dynamics in the configuration model. ...
Article
Full-text available
We study a multi-type SIR epidemic process within a heterogeneous population that interacts through a network. We base social contact on a random graph with given vertex degrees, and we give limit theorems on the fraction of infected individuals. For given social distancing individual strategies, we establish the epidemic reproduction number $\mathfrak{R}_0$, which can be used to identify network vulnerability and inform vaccination policies. In the second part of the paper, we study the equilibrium of the social distancing game. Individuals choose their social distancing level according to an anticipated global infection rate, which must equal the actual infection rate following their choices. We give conditions for the existence and uniqueness of an equilibrium. In the case of random regular graphs, we show that voluntary social distancing will always be socially sub-optimal.
... Over the past two decades, both to address the discrepancies found in prior epidemic forecasts, and to better model spreading processes of computer viruses over communication networks, there has been an extensive study of epidemic processes evolving over complex network structures; see for example [11], [12], [13], [14], and from a controls perspective [15]. To account for network structure among members of a population, an agent-based perspective of epidemic processes is taken where each agent is represented by a node in the network, and the edges in the network between nodes represent the strength of the interaction between agents. Nodes in the network may represent either individuals or subgroups in the larger population. ...
Preprint
We present an epidemiological compartment model, SAIR(S), that explicitly captures the dynamics of asymptomatic infected individuals in an epidemic spread process. We first present a group model and then discuss networked versions. We provide an investigation of equilibria and stability properties for these models, and present simulation results illustrating the effects of asymptomatic-infected individuals on the spread of the disease. We also discuss local isolation effects on the epidemic dynamics in terms of the networked models. Finally, we provide initial parameter estimation results based on simple least-squares approaches and local test-site data. Keywords: Epidemic dynamics, networks, data-informed modeling, stability analysis, parameter estimation
... The fixed-radius model is a natural generalization of covering Euclidean space with balls, though this generalization to metric spaces has apparently not been studied before. The growth model in our simple form has also apparently not been studied, though it can be regarded as an extremely basic model for the spread of information or the spread of an epidemic, a field with a huge literature studying models on graphs or Euclidean space [11,16,21]. A related growth model in two dimensions, where seeds arrive (instead of as a constant-rate process) as a Poisson process whose rate is the current occupied area, is studied in [6,9]. ...
Preprint
Full-text available
Simple random coverage models, well studied in Euclidean space, can also be defined on a general compact metric space. By analogy with the geometric models, and with the discrete coupon collector's problem and with cover times for finite Markov chains, one expects a "weak concentration" bound for the distribution of the cover time to hold under minimal assumptions. We give two such results, one for random fixed-radius balls and the other for sequentially arriving randomly-centered and deterministically growing balls. Each is in fact a simple application of a different more general bound, the former concerning coverage by i.i.d. random sets with arbitrary distribution, and the latter concerning hitting times for Markov chains with a strong monotonicity property. The growth model seems generally more tractable, and we record some basic results and open problems for that model.
... Local bifurcation analysis has been performed on the man-environment-man and SIR models [17,18], while other work has used Lyapunov functions to determine endemic equilibria for SIRS and SEIR models [19,20] or has considered a mean-field approach [6,[21][22][23]. Recent work has addressed how the network structure influences the spread of disease via the initial conditions and network topologies [4,5] and the ways in which epidemics spread on random networks [3]. Analytical results have also been obtained for the case of two competing (or promoting) diseases on a network [18,24]. ...
Article
Full-text available
Effective intervention strategies for epidemics rely on the identification of their origin and on the robustness of the predictions made by network disease models. We introduce a Bayesian uncertainty quantification framework to infer model parameters for a disease spreading on a network of communities from limited, noisy observations; the state-of-the-art computational framework compensates for the model complexity by exploiting massively parallel computing architectures. Using noisy, synthetic data, we show the potential of the approach to perform robust model fitting and additionally demonstrate that we can effectively identify the disease origin via Bayesian model selection. As disease-related data are increasingly available, the proposed framework has broad practical relevance for the prediction and management of epidemics.
... Early work on (SIR) epidemics focused on the homogeneous population setting, while more recent works model the interactions between individuals via an underlying network. Detailed overviews of different epidemic models over networks are given in [Barrat et al., 2008, Draief and Massoulié, 2010, Pastor-Satorras et al., 2015, Nowzari et al., 2016, Mei et al., 2017]. ...
Preprint
Motivated by the ongoing pandemic COVID-19, we propose a closed-loop framework that combines inference from testing data, learning the parameters of the dynamics and optimal resource allocation for controlling the spread of the susceptible-infected-recovered (SIR) epidemic on networks. Our framework incorporates several key factors present in testing data, such as high risk individuals are more likely to undergo testing and infected individuals potentially act as asymptomatic carriers of the disease. We then present two tractable optimization problems to evaluate the trade-off between controlling the growth-rate of the epidemic and the cost of non-pharmaceutical interventions (NPIs). Our results provide compelling insights for policy-makers, including the significance of early testing and the emergence of a second wave of infections if NPIs are prematurely withdrawn.
... Proposition A.1. (Proposition 5.2 in [17]) Let Y be a Poisson process with unit rate (i.e. Y (t) ∼ Poisson(t) for all t ≥ 0). ...
Preprint
Full-text available
The Fiedler vector of a graph, namely the eigenvector corresponding to the second smallest eigenvalue of a graph Laplacian matrix, plays an important role in spectral graph theory with applications in problems such as graph bi-partitioning and envelope reduction. Algorithms designed to estimate this quantity usually rely on a priori knowledge of the entire graph, and employ techniques such as graph sparsification and power iterations, which have obvious shortcomings in cases where the graph is unknown, or changing dynamically. In this paper, we develop a framework in which we construct a stochastic process based on a set of interacting random walks on a graph and show that a suitably scaled version of our stochastic process converges to the Fiedler vector for a sufficiently large number of walks. Like other techniques based on exploratory random walks and on-the-fly computations, such as Markov Chain Monte Carlo (MCMC), our algorithm overcomes challenges typically faced by power-iteration-based approaches. But, unlike any existing random walk based method such as MCMCs where the focus is on the leading eigenvector, our framework with interacting random walks converges to the Fiedler vector (second eigenvector). We also provide numerical results to confirm our theoretical findings on different graphs, and show that our algorithm performs well over a wide range of parameters and numbers of random walks. Simulation results over time-varying dynamic graphs are also provided to show the efficacy of our random walk based technique in such settings. As an important contribution, we extend our results and show that our framework is applicable for approximating not just the Fiedler vector of graph Laplacians, but also the second eigenvector of any time reversible Markov Chain kernel via interacting random walks.
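For reference, the sketch below computes the Fiedler vector by a direct eigendecomposition of the graph Laplacian, i.e. the centralized baseline that the paper's interacting-random-walk scheme is designed to approximate; the example graph is an illustrative choice.

```python
import numpy as np
import networkx as nx

def fiedler_vector(G):
    """Fiedler vector: eigenvector of the graph Laplacian associated with the
    second-smallest eigenvalue, computed here by full eigendecomposition."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    eigvals, eigvecs = np.linalg.eigh(L)       # eigenvalues sorted ascending
    return eigvecs[:, 1], eigvals[1]

if __name__ == "__main__":
    # Two dense clusters joined by a single edge: the sign pattern of the
    # Fiedler vector recovers the bi-partition.
    G = nx.barbell_graph(8, 0)
    v, lam2 = fiedler_vector(G)
    print("algebraic connectivity:", round(lam2, 4))
    print("sign pattern:", ["+" if x > 0 else "-" for x in v])
```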
... Originating from epidemiology, the study of epidemic spreading processes has become an attractive area due to its wide applications in propagation phenomena, such as computer virus dissemination and information diffusion Draief and Massoulié (2010); Kephart and White (1991); Nowzari et al. (2016). In order to mathematically model such a process, multitudes of epidemic models incorporating different compartments have been proposed, e.g., susceptible-infected-susceptible (SIS), susceptible-infected-recovered-susceptible (SIRS), and susceptible-exposed-infected-recovered (SEIR) Kermack and McKendrick (1927); Li et al. (2017). ...
Preprint
Full-text available
Networked epidemic models have been widely adopted to describe propagation phenomena. The endemic equilibrium of these models is of great significance in the field of viral marketing, innovation dissemination, and information diffusion. However, its stability conditions have not been fully explored. In this paper we study the stability of the endemic equilibrium of a networked Susceptible-Infected-Susceptible (SIS) epidemic model with heterogeneous transition rates in a discrete-time manner. We show that the endemic equilibrium, if it exists, is asymptotically stable for any nontrivial initial condition. Under mild assumptions on initial conditions, we further prove that during the spreading process there exists no overshoot with respect to the endemic equilibrium. Finally, we conduct numerical experiments on real-world networks to demonstrate our results.
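The exact heterogeneous discrete-time SIS model studied in the paper is not reproduced in the abstract. The sketch below iterates one common discrete-time mean-field form of a networked SIS model and lets the state settle near an endemic equilibrium when one exists; the update rule, the step size h and the example rates are assumptions, not the cited model.

    import numpy as np

    def sis_trajectory(A, beta, delta, x0, h=0.1, steps=2000):
        """Iterate a common discrete-time mean-field networked SIS model:

            x_i(k+1) = x_i(k) + h * ((1 - x_i(k)) * beta_i * sum_j A_ij x_j(k) - delta_i * x_i(k))

        A: adjacency matrix; beta, delta: per-node infection / recovery rates;
        x0: initial infection probabilities in [0, 1]."""
        A = np.asarray(A, dtype=float)
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(steps):
            x = x + h * ((1.0 - x) * (beta * (A @ x)) - delta * x)
            x = np.clip(x, 0.0, 1.0)  # keep every entry in [0, 1]
        return x  # settles near the endemic equilibrium when one exists

    # Complete graph on 4 nodes with heterogeneous rates (illustrative values).
    A = np.ones((4, 4)) - np.eye(4)
    beta = np.array([0.30, 0.25, 0.20, 0.35])
    delta = np.array([0.40, 0.50, 0.45, 0.40])
    print(sis_trajectory(A, beta, delta, x0=[0.01] * 4))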
... studying finite populations [21]. ...
... One such statistical quantity of interest is the mean number of distinct sites S_W(t) visited by W random walkers in t discrete time steps on a network with N nodes. This is relevant for problems related to (mis-)information and contagion spreading and search problems on networks [16]. ...
Preprint
Full-text available
Random walks on discrete lattices are fundamental models that form the basis for our understanding of transport and diffusion processes. For a single random walker on complex networks, many of its properties, such as the mean first passage time and cover time, are known. However, recent applications such as search engines and recommender systems involve multiple random walkers on complex networks. In this work, based on numerical simulations, we show that the fraction of nodes of a scale-free network not visited by W random walkers in time t has a stretched-exponential form independent of the details of the network and the number of walkers. This leads to a power-law relation between the nodes not visited by W walkers and those not visited by one walker. The problem of finding the distinct nodes visited by W walkers can, effectively, be reduced to that of a single walker. The robustness of the results is demonstrated by verifying them on four different real-world networks that approximately display scale-free structure.
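The stretched-exponential law is the paper's empirical finding and is not re-derived here. The sketch below only shows how S_W(t), the mean number of distinct nodes visited by W walkers in t steps, can be estimated by simulation on a given graph; the cycle graph and the function name mean_distinct_visited are illustrative assumptions.

    import random

    def mean_distinct_visited(graph, W, t, trials=100, seed=1):
        """Monte Carlo estimate of S_W(t): the mean number of distinct nodes
        visited by W independent simple random walkers in t discrete steps.
        graph: {node: list_of_neighbours}; walkers start at uniform random nodes."""
        rng = random.Random(seed)
        nodes = list(graph)
        total = 0
        for _ in range(trials):
            walkers = [rng.choice(nodes) for _ in range(W)]
            visited = set(walkers)
            for _ in range(t):
                walkers = [rng.choice(graph[w]) for w in walkers]
                visited.update(walkers)
            total += len(visited)
        return total / trials

    # Cycle on 200 nodes: distinct coverage grows sub-additively in the number of walkers.
    cycle = {i: [(i - 1) % 200, (i + 1) % 200] for i in range(200)}
    print(mean_distinct_visited(cycle, W=1, t=100), mean_distinct_visited(cycle, W=5, t=100))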
... More recently, the bootstrap percolation process has been investigated in the context of random graphs. This is partly motivated by the increasing interest in dynamical processes taking place over large-scale complex systems such as technological, biological and social networks, whose irregular structure is better captured by random graph models (see [4] for a comprehensive introduction to epidemics in complex networks). For example, in the case of social networks, bootstrap percolation may serve as a primitive model for the spread of ideas, rumors and trends among individuals. ...
Article
We consider the Erdős–Rényi random graph $G_{n,p}$ and we analyze the simple irreversible epidemic process on the graph, known in the literature as bootstrap percolation. We give a quantitative version of some results by Janson et al. (2012), providing a fine asymptotic analysis of the final size $A^*_n$ of active nodes, under a suitable super-critical regime. More specifically, we establish large deviation principles for the sequence of random variables $\{(n - A^*_n)/f(n)\}_{n \ge 1}$ with explicit rate functions and allowing the scaling function $f$ to vary in the widest possible range. MSC 2010 Subject Classification: 05C80, 60K35, 60F10.
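The large deviation analysis itself is not reproduced here. The sketch below only simulates the bootstrap percolation process the abstract refers to: on a sampled G(n, p) graph, an inactive node activates once it has at least r active neighbours, and the final number of active nodes is reported. The parameters n, p, r and the seed-set size are illustrative assumptions.

    import random
    from collections import deque

    def bootstrap_percolation(n, p, r, n_seeds, seed=42):
        """Bootstrap percolation with activation threshold r on the random graph G(n, p).

        n_seeds uniformly chosen nodes start active; an inactive node becomes
        active as soon as at least r of its neighbours are active. Returns the
        final number of active nodes."""
        rng = random.Random(seed)
        adj = [[] for _ in range(n)]
        for i in range(n):                      # sample G(n, p)
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].append(j)
                    adj[j].append(i)
        active = set(rng.sample(range(n), n_seeds))
        active_neighbours = [0] * n
        queue = deque(active)
        while queue:                            # propagate activations
            v = queue.popleft()
            for u in adj[v]:
                if u in active:
                    continue
                active_neighbours[u] += 1
                if active_neighbours[u] >= r:
                    active.add(u)
                    queue.append(u)
        return len(active)

    print(bootstrap_percolation(n=1000, p=0.01, r=2, n_seeds=40))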
... In computation theory, mathematics, physics, complexity theory, theoretical biology and material sciences, it is known as a two-dimensional, two-state Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods and exponential waiting times [11]. Related models appeared in epidemiology [12,19], economics [21], engineering and computer sciences [14,25]. Mathematically, all of them fall in the general area of interacting particle systems (IPS) [26,27]. ...
Conference Paper
We consider an agent-based distributed algorithm with exponentially distributed waiting times in which agents with binary states interact locally over a geometric graph, and based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their states. This model is equivalent to an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods, a zero-temperature Ising model with Glauber dynamics, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks, and spin glass systems. We prove a shape theorem for the spread of the “affected” nodes during the process dynamics and show that in the steady state, for τ ∈ (τ*,1−τ*) ∖ {1/2}, where τ* ≈ 0.488, the size of the “monochromatic region” at the end of the process is at least exponential in the size of the local neighborhood of interaction with probability approaching one as N grows. Combined with previous results on the expected size of the monochromatic region that provide a matching upper bound, this implies that in the steady state the size of the monochromatic region of any agent is exponential with high probability for the mentioned interval of τ. The shape theorem is based on a novel concentration inequality for the spreading time, and provides a precise geometrical description of the process dynamics. The result on the size of the monochromatic region considerably extends our understanding of the steady state. Showing convergence with high probability, it rules out the possibility that only a small fraction of the nodes are eventually contained in large monochromatic regions, which was left open by previous works.
... In computation theory, mathematics, physics, complexity theory, theoretical biology and material sciences, it is known as a two-dimensional, two-state Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods and exponential waiting times [11]. Related models appeared in epidemiology [18,12], economics [21], engineering and computer sciences [25,14]. Mathematically, all of them fall in the general area of interacting particle systems [26,27]. ...
Article
We consider a long-range interacting particle system in which binary particles are located at the integer points of a flat torus. Based on the interactions with other particles in its "neighborhood" and on the value of a common intolerance threshold $\tau$, every particle decides whether to change its state after an independent and exponentially distributed waiting time. This is equivalent to a Schelling model of self-organized segregation in an open system, a zero-temperature Ising model with Glauber dynamics, or an Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods. We first prove a shape theorem for the spread of the "affected" nodes during the process dynamics. Second, we show that when the process stops, for all $\tau \in (\tau^*, 1-\tau^*) \setminus \{1/2\}$ where $\tau^* \approx 0.488$, and when the size of the neighborhood of interaction $N$ is sufficiently large, every particle is contained in a large "monochromatic region" of size exponential in $N$, almost surely. When particles are placed on the infinite lattice $\mathbb{Z}^2$ rather than on a flat torus, for the values of $\tau$ mentioned above, sufficiently large $N$, and after a sufficiently long evolution time, every particle is contained in a large monochromatic region of size exponential in $N$, almost surely.
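The shape theorem and the exponential bound above are analytical results and are not reproduced here. The sketch below simulates one plausible reading of the asynchronous threshold dynamics on a flat torus with extended Moore neighbourhoods (a uniformly chosen site flips when the fraction of like neighbours falls below tau); the precise update rule, the grid size and the number of updates are assumptions made for illustration only.

    import random

    def threshold_dynamics_on_torus(L, R, tau, updates, seed=7):
        """Asynchronous binary threshold dynamics on an L x L flat torus with
        extended Moore neighbourhoods of range R. Independent exponential clocks
        are simulated by repeatedly updating one uniformly chosen site.

        Assumed update rule: a site flips its state whenever the fraction of
        neighbours currently sharing that state is below the threshold tau."""
        rng = random.Random(seed)
        state = [[rng.randint(0, 1) for _ in range(L)] for _ in range(L)]
        offsets = [(dx, dy) for dx in range(-R, R + 1) for dy in range(-R, R + 1)
                   if (dx, dy) != (0, 0)]
        for _ in range(updates):
            x, y = rng.randrange(L), rng.randrange(L)
            s = state[x][y]
            same = sum(state[(x + dx) % L][(y + dy) % L] == s for dx, dy in offsets)
            if same / len(offsets) < tau:
                state[x][y] = 1 - s
        return state

    grid = threshold_dynamics_on_torus(L=40, R=2, tau=0.45, updates=200000)
    print(sum(map(sum, grid)))  # number of 1-sites after the dynamics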
Chapter
Epidemics start within a network from epidemic sources that spread information over time to other nodes. Data about the exact contagion pattern among nodes are often not available beyond a simple snapshot characterizing nodes as infected or not. Thus, a fundamental problem in network epidemics is identifying the set of source nodes after the epidemic has reached a significant fraction of the network. This work tackles the multiple-source detection problem by using a graph neural network model to classify nodes as being sources of the epidemic. The inputs to the model (node attributes) are novel epidemic information from the k-hop neighborhoods of the nodes. The proposed framework is trained and evaluated under different network models, real networks and scenarios, and the results indicate different trade-offs. In a direct comparison with prior works, the proposed framework outperformed them in all scenarios available for comparison.
Book
This book is a general introduction to the statistical analysis of networks, and can serve both as a research monograph and as a textbook. Numerous fundamental tools and concepts needed for the analysis of networks are presented, such as network modeling, community detection, graph-based semi-supervised learning and sampling in networks. The description of these concepts is self-contained, with both theoretical justifications and applications provided for the presented algorithms. Researchers, including postgraduate students, working in the area of network science, complex network analysis, or social network analysis, will find up-to-date statistical methods relevant to their research tasks. This book can also serve as textbook material for courses related to the statistical approach to the analysis of complex networks. In general, the chapters are fairly independent and self-supporting, and the book could be used for course composition “à la carte”. Nevertheless, Chapter 2 is needed to a certain degree for all parts of the book. It is also recommended to read Chapter 4 before reading Chapters 5 and 6, but this is not absolutely necessary. Reading Chapter 3 can also be helpful before reading Chapters 5 and 7. As prerequisites for reading this book, a basic knowledge of probability, linear algebra and elementary notions of graph theory is advised. Appendices describing the required notions from the above-mentioned disciplines have been added to help readers gain further understanding.
Article
The Fiedler vector of a graph, namely the eigenvector corresponding to the second smallest eigenvalue of a graph Laplacian matrix, plays an important role in spectral graph theory with applications in problems such as graph bi-partitioning and envelope reduction. Algorithms designed to estimate this quantity usually rely on a priori knowledge of the entire graph, and employ techniques such as graph sparsification and power iterations, which have obvious shortcomings in cases where the graph is unknown, or changing dynamically. In this paper, we develop a framework in which we construct a stochastic process based on a set of interacting random walks on a graph and show that a suitably scaled version of our stochastic process converges to the Fiedler vector for a sufficiently large number of walks. Like other techniques based on exploratory random walks and on-the-fly computations, such as Markov Chain Monte Carlo (MCMC), our algorithm overcomes challenges typically faced by power-iteration-based approaches. But, unlike any existing random-walk-based method such as MCMC, where the focus is on the leading eigenvector, our framework with interacting random walks converges to the Fiedler vector (second eigenvector). We also provide numerical results to confirm our theoretical findings on different graphs, and show that our algorithm performs well over a wide range of parameters and numbers of random walks. Simulation results over time-varying dynamic graphs are also provided to show the efficacy of our random-walk-based technique in such settings. As an important contribution, we extend our results and show that our framework is applicable for approximating not just the Fiedler vector of graph Laplacians, but also the second eigenvector of any time-reversible Markov chain kernel via interacting random walks. To the best of our knowledge, our attempt to approximate the second eigenvector of any time-reversible Markov chain using random walks is the first of its kind, opening up possibilities for achieving approximations of higher-level eigenvectors using random walks on graphs.
Article
In multi-server systems, a classical job assignment algorithm works as follows: at the arrival of each job, pick d servers independently and uniformly at random and send the job to the least loaded server among the d servers. This model is known as the power-of-d choices algorithm. In this paper, we analyze a variant of this algorithm, where d servers are sampled through d independent non-backtracking random walks on a k-regular graph. The random walkers are periodically reset to independent uniform random positions. Under some assumptions on the underlying graph, we show that the system dynamics under this new algorithm converges to the solution of a deterministic ordinary differential equation (ODE), which is the same ODE as for the classical power-of-d choices. We also show that the new algorithm stabilizes the system, and that the stationary distribution of the system converges to the stationary solution of the ODE. The new scheme can be considered a derandomized version of power-of-d choices, as it reduces the use of randomness while maintaining the performance of power-of-d choices.
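The paper's variant samples servers through non-backtracking random walks, which is not reproduced here; the sketch below implements only the classical power-of-d choices rule described in the first sentence of the abstract. The server count, the job count and the function name are illustrative assumptions.

    import random

    def power_of_d_choices(num_servers, d, num_jobs, seed=0):
        """Classical power-of-d choices: for each arriving job, sample d servers
        independently and uniformly at random and send the job to the least
        loaded of them. Returns the final load vector."""
        rng = random.Random(seed)
        loads = [0] * num_servers
        for _ in range(num_jobs):
            candidates = [rng.randrange(num_servers) for _ in range(d)]
            target = min(candidates, key=lambda s: loads[s])
            loads[target] += 1
        return loads

    loads = power_of_d_choices(num_servers=100, d=2, num_jobs=100 * 50)
    print(min(loads), max(loads))  # with d = 2 the maximum load stays close to the mean (50)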
Article
Modeling cyber-attacks is a very attractive area of research because of its practical importance. However, most of the related research in the literature does not consider simultaneous (or coordinated) attacks, which are, in fact, an important attack instrument in practice. This is mainly because of the complicated evolution of cyber-attacks over networks. In this paper, we propose a novel model, which can accommodate different types of simultaneous attacks with possibly heterogeneous compromise probabilities. Our results show that simultaneous attacks have a significant effect on the reliability/dynamics of network systems. In particular, we present a sufficient condition for the epidemic dying out over the network, and upper bounds for the time to extinction. We also provide upper bounds for the compromise probabilities of network systems when the evolution enters the quasi-equilibrium state. The effects of the strength of simultaneous attacks and of heterogeneity among successful attack probabilities on epidemic spreading are studied as well. The theoretical results are further validated by simulation evidence.
Preprint
Full-text available
We consider evoSIR, a variant of the SIR model, on Erdős–Rényi random graphs in which susceptibles with an infected neighbor break that connection at rate $\rho$ and rewire to a randomly chosen individual. We compute the critical infection rate $\lambda_c$ and the probability of a large epidemic by showing that they are the same for the delSIR model, in which S-I connections are deleted instead of rewired. The final size of a large delSIR epidemic has a continuous transition. Simulations suggest that the final size of a large evoSIR epidemic is discontinuous at $\lambda_c$.
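The sketch below is a crude discrete-time approximation of evoSIR-type dynamics, intended only to illustrate the model description in the abstract: along each S-I edge the susceptible endpoint is infected at rate lambda or rewires the edge at rate rho, and infected individuals recover at rate 1. The time step h, the recovery rate, the graph size and all parameter values are assumptions, not those of the cited paper.

    import random

    def evosir_step(adj, state, lam, rho, h, rng):
        """One crude discrete-time step (step size h) of evoSIR-type dynamics:
        along each S-I edge the susceptible endpoint is infected w.p. lam*h, or
        rewires the edge to a uniformly random node w.p. rho*h; each infected
        node recovers w.p. h (recovery rate taken to be 1)."""
        n = len(adj)
        to_infect, to_recover, rewires = set(), set(), []
        for v in range(n):
            if state[v] != "I":
                continue
            for u in adj[v]:
                if state[u] != "S":
                    continue
                r = rng.random()
                if r < lam * h:
                    to_infect.add(u)
                elif r < (lam + rho) * h:
                    rewires.append((u, v))
            if rng.random() < h:
                to_recover.add(v)
        for u, v in rewires:            # susceptible u drops its edge to infected v
            adj[u].discard(v)
            adj[v].discard(u)
            w = rng.randrange(n)
            if w != u:
                adj[u].add(w)
                adj[w].add(u)
        for u in to_infect:
            state[u] = "I"
        for v in to_recover:
            state[v] = "R"

    # Erdos-Renyi contact graph with mean degree 8 and a single initial infection.
    rng = random.Random(3)
    n, p = 500, 8 / 500
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    state = ["S"] * n
    state[0] = "I"
    while "I" in state:
        evosir_step(adj, state, lam=0.4, rho=0.5, h=0.01, rng=rng)
    print(state.count("R"), "nodes were infected before extinction")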
Chapter
Social networks are an important part of society. Analysis of information flows and of user activity may provide different opportunities for scientists. However, existing methods and models for predicting information flow could not provide a sufficient level of accuracy. The proposed method of information flow analysis is based on machine learning algorithms. The first stage of the research analyzes the features that could be useful for describing the activity of users and information in social networks. Data were assembled from the social network “VKontakte” and include the main information about the selected features for each person. In the second stage, experiments based on machine learning methods were conducted. Three main approaches were implemented: a naive Bayes classifier, logistic regression analysis and k-means clustering. A comparison of the different machine learning methods was conducted in the last stage. The best prediction results were achieved with the naive Bayes classifier. The proposed method for predicting information flow can be used to organize users’ protection against different types of information attacks.
Article
Research in graph signal processing (GSP) aims to develop tools for processing data defined on irregular graph domains. In this paper, we first provide an overview of core ideas in GSP and their connection to conventional digital signal processing, along with a brief historical perspective to highlight how concepts recently developed in GSP build on top of prior research in other areas. We then summarize recent advances in developing basic GSP tools, including methods for sampling, filtering, or graph learning. Next, we review progress in several application areas using GSP, including processing and analysis of sensor network data, biological data, and applications to image processing and machine learning.