Article

Vital nodes identification in complex networks

Authors:
  • Linyuan Lü
  • Duanbing Chen
  • Xiao-Long Ren
  • Qian-Ming Zhang
  • Yi-Cheng Zhang
  • Tao Zhou

Abstract

Real networks are highly heterogeneous, with nodes playing far different roles in structure and function. Identifying vital nodes is thus of great significance, allowing us to control the outbreak of epidemics, to conduct advertising campaigns for e-commercial products, to predict popular scientific publications, and so on. Vital nodes identification has attracted increasing attention from both the computer science and physics communities, with algorithms ranging from simply counting the immediate neighbors to complicated machine learning and message-passing approaches. In this review, we clarify the concepts and metrics, classify the problems and methods, review the important progress, and describe the state of the art. Furthermore, we provide extensive empirical analyses comparing well-known methods on disparate real networks, and highlight future directions. Despite the emphasis on physics-rooted approaches, the unification of language and comparison with cross-domain methods should trigger interdisciplinary solutions in the near future.


... (2) In most studies of SNGDM, trust degrees among individuals are mainly determined by their trust relationships within the social network, such as network centrality metrics [3,12]. In general, individuals with high centrality scores are assumed to gain greater trust from others [25][26][27]. While centrality-based approaches effectively capture structural influence within social networks [28], they often overlook the role of individual preferences in shaping trust degrees. ...
... In this paper, the edge $(i, j) \in E$ represents not only the trust relationship from individual $i$ to individual $j$ but also the social influence from $i$ to $j$. In social network analysis (SNA), centrality-based methods are employed to calculate social influence, considering those individuals with high centrality scores as influential ones [25][26][27]. For instance, in-degree centrality simply measures the number of incoming edges an entity receives [48]. ...
... Social network analysis (SNA) provides indispensable tools for evaluating individuals' social influence within a network. Foundational concepts, such as centrality measures, enable researchers to quantify the structural importance of individuals and their capacity to influence others [27,44,48]. ...
Article
Full-text available
In traditional group decision-making models, it is commonly assumed that all decision makers exert equal influence on one another. However, in real-world social networks, such as Twitter and Facebook, certain individuals, known as top persuaders, hold a disproportionately large influence over others. This study formulates the consensus-reaching problem in social network group decision making by introducing a novel framework for predicting top persuaders. Building on social network theories, we develop a social persuasion model that integrates social influence and social status to quantify individuals' persuasive power more comprehensively. Subsequently, we propose a new consensus-reaching process (CRP) that leverages the influence of top persuaders. Our simulations and comparative analyses demonstrate that: (1) increasing the number of top persuaders substantially reduces the iterations required to achieve consensus; (2) establishing trust relationships between top persuaders and other individuals accelerates the consensus process; and (3) top persuaders retain a high and stable level of influence throughout all CRP rounds. Our research provides practical insights into identifying and strategically guiding top persuaders to enhance the efficiency of consensus reaching and reduce social management costs within social networked environments.
... For example, Korn et al. [25] illustrated that the h-index provides a well-balanced mix of traditional centrality measures. Lü et al. [26,27] also demonstrated the effectiveness of the h-index in evaluating the influence of nodes in many real-world networks. However, we find that the h-index method often assigns the same value to many different nodes, which makes it difficult to distinguish the real influences of these nodes. ...
... In order to describe the k-core decomposition clearly, we take one simple network [14,17,27] as an example. In Fig. 1, node 1 and node 16 have the same degree value $k = 8$, so the degree centrality cannot effectively distinguish which one is more influential. ...
... Recently, the h-index [24] (also known as the Hirsch index) has been introduced to identify influential spreaders [25][26][27] in networks. The h-index of node i is defined as the largest value h such that i has at least h neighbors, each with a degree no less than h [25]. ...
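This node-level definition lends itself to a compact implementation. A minimal sketch using networkx (the function and variable names are ours, for illustration):

    import networkx as nx

    def node_h_index(G, i):
        """Largest h such that node i has at least h neighbors of degree >= h."""
        degs = sorted((G.degree(j) for j in G.neighbors(i)), reverse=True)
        h = 0
        for rank, d in enumerate(degs, start=1):
            if d >= rank:
                h = rank
            else:
                break
        return h

    G = nx.karate_club_graph()
    print({i: node_h_index(G, i) for i in list(G)[:5]})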
Preprint
Identifying influential nodes in complex networks has received increasing attention for its great theoretical and practical applications in many fields. Traditional methods, such as degree centrality, betweenness centrality, closeness centrality, and coreness centrality, have more or less disadvantages in detecting influential nodes, which have been illustrated in the related literature. Recently, the h-index, which is utilized to measure both the productivity and citation impact of the publications of a scientist or scholar, has been introduced to the network world to evaluate a node's spreading ability. However, this method assigns too many nodes the same value, which leads to a resolution limit problem in distinguishing the real influence of these nodes. In this paper, we propose a local h-index centrality (LH-index) method for identifying and ranking influential nodes in networks. The LH-index method simultaneously takes into account the h-index values of the node itself and its neighbors, based on the idea that a node connected to more influential nodes will also be influential. According to simulation results with the stochastic Susceptible-Infected-Recovered (SIR) model in four real-world networks and several simulated networks, we demonstrate the effectiveness of the LH-index method in identifying influential nodes in networks.
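Building on the node_h_index sketch above, one plausible reading of the LH-index (an assumption on our part; the paper should be consulted for the exact formula) adds a node's own h-index to the sum of its neighbors' h-indices:

    def lh_index(G, i):
        # LH-index sketch: own h-index plus neighbors' h-indices
        # (assumed formulation, not verified against the paper)
        return node_h_index(G, i) + sum(node_h_index(G, j) for j in G.neighbors(i))

Because the neighbor sum rarely coincides for distinct nodes, a score of this shape breaks many of the ties that the plain h-index leaves unresolved.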
... How to effectively disentangle truth from falsehood to protect individuals from malicious deception is a critical problem, especially for companies that provide information services or products online [4,5,6,7]. Reputation systems arose from the need for Internet users to gain trust in the individuals they transact with online [8,9]. Additionally, reputation systems enable users and customers to better understand the provided information, products, and services [10,11]. ...
... (ii) Estimate the quality of each object with equation (2), where $R_i$ can be $IR_i$ (equation (4)), $CR_i$ (equation (7)) or $IARR_i$ (equation (8)), while $IARR2$ can be calculated based on $IARR$ according to equation (9). ...
... Due to the user's personal bias of rating, we proposed an iterative balance model to eliminate the bias in order to better quantify the user's reputation. The model considers the user magnitude to meet equation (9), and its process can be described as follows: ...
Preprint
The ongoing rapid development of e-commercial and interest-based websites makes it more pressing to evaluate objects' accurate quality before recommendation by employing an effective reputation system. Objects' quality is often calculated based on their historical information, such as selection records or rating scores, to help visitors make decisions before watching, reading or buying. Usually high-quality products obtain higher average ratings than low-quality products regardless of rating biases or errors. However, many empirical cases demonstrate that consumers may be misled by rating scores added by unreliable users or deliberate tampering. In this case, users' reputation, i.e., the ability to rate trustworthily and precisely, makes a big difference during the evaluation process. Thus, one of the main challenges in designing reputation systems is eliminating the effects of users' rating bias on the evaluation results. To give an objective evaluation of each user's reputation and uncover an object's intrinsic quality, we propose an iterative balance (IB) method to correct users' rating biases. Experiments on two online video-providing websites, namely the MovieLens and Netflix datasets, show that the IB method is a highly self-consistent and robust algorithm and that it can accurately quantify movies' actual quality and users' stability of rating. Compared with existing methods, the IB method has a higher ability to find the "dark horses", i.e., not so popular yet good movies, in the Academy Awards.
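The excerpts above reference equations (2), (4) and (7)-(9) that are not reproduced here, so the paper's exact update rules cannot be reconstructed. The following is only a generic sketch of the iterative idea the abstract describes: alternate between estimating object quality as a reputation-weighted mean of ratings and updating each user's reputation from the residual error of their bias-corrected ratings.

    import numpy as np

    def iterative_reputation(R, mask, n_iter=50):
        """Generic quality/reputation iteration (illustrative only; not the IB equations).
        R: users x objects rating matrix; mask: 1 where a rating is observed."""
        rep = np.ones(R.shape[0])
        for _ in range(n_iter):
            # object quality: reputation-weighted average of observed ratings
            w = rep[:, None] * mask
            quality = (w * R).sum(0) / np.maximum(w.sum(0), 1e-12)
            # user bias: mean deviation of the user's ratings from current quality
            dev = (R - quality[None, :]) * mask
            bias = dev.sum(1) / np.maximum(mask.sum(1), 1)
            # reputation: inverse residual error after removing the user's bias
            err = (((dev - bias[:, None] * mask) ** 2).sum(1)
                   / np.maximum(mask.sum(1), 1))
            rep = 1.0 / (err + 1e-6)
        return quality, rep

    # toy usage: five users rate four objects; the last user rates very noisily
    rng = np.random.default_rng(0)
    true_q = np.array([1.0, 2.0, 3.0, 4.0])
    noise = np.array([0.1, 0.1, 0.1, 0.1, 2.0])
    R = true_q[None, :] + rng.normal(0.0, noise[:, None], (5, 4))
    quality, rep = iterative_reputation(R, np.ones_like(R))
    print(quality.round(2), rep.round(2))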
... large portion of the system. Following the seminal works by Kempe et al. [12] and Kitsak et al. [11], a large number of papers have attempted to identify the influencers based on network structural analysis and centrality metrics [9,11,13,14,15,16,17,18,19] (see [9] for a review). In this stream of works, the ground-truth influence of a given "seed" node (or equivalently, its spreading ability) is defined as the average number of nodes that are eventually reached by independent diffusion processes initiated by that seed node [9]. ...
... Following the seminal works by Kempe et al. [12] and Kitsak et al. [11], a large number of papers have attempted to identify the influencers based on network structural analysis and centrality metrics [9,11,13,14,15,16,17,18,19] (see [9] for a review). In this stream of works, the ground-truth influence of a given "seed" node (or equivalently, its spreading ability) is defined as the average number of nodes that are eventually reached by independent diffusion processes initiated by that seed node [9]. In the following, we refer to this property as node late-time influence, and we refer to nodes with large late-time influence as late-time influencers. ...
Preprint
Influential nodes in complex networks are typically defined as those nodes that maximize the asymptotic reach of a spreading process of interest. However, for practical applications such as viral marketing and online information spreading, one is often interested in maximizing the reach of the process in a short amount of time. The traditional definition of influencers in network-related studies from diverse research fields narrows down the focus to the late-time state of the spreading processes, leaving the following question unsolved: which nodes are able to initiate large-scale spreading processes, in a limited amount of time? Here, we find that there is a fundamental difference between the nodes -- which we call "fast influencers" -- that initiate the largest-reach processes in a short amount of time, and the traditional, "late-time" influencers. Stimulated by this observation, we provide an extensive benchmarking of centrality metrics with respect to their ability to identify both the fast and late-time influencers. We find that local network properties can be used to uncover the fast influencers. In particular, a parsimonious, local centrality metric (which we call social capital) achieves optimal or nearly-optimal performance in the fast influencer identification for all the analyzed empirical networks. Local metrics tend to be also competitive in the traditional, late-time influencer identification task.
... Many topological properties have been proposed and measured to characterize complex networks [1]. Among them, centrality measures (such as degree, betweenness [2] or eigenvector [3] centrality) aim at quantifying the relative importance of individual vertices in the overall topology; they are often related to the behavior of processes unfolding on the complex network structure, such as spreading or diffusion [4][5][6]. Prominent in this context is the K-core decomposition, a recursive pruning procedure [7] which iteratively peels off nodes from the network (K-shells), leaving subsets (K-cores) which are increasingly dense and mutually interconnected. The K-core decomposition proceeds as follows: starting with the full graph, nodes with degree q = 1 are removed, repeating this operation iteratively until only nodes with degree q ≥ 2 remain. ...
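The pruning procedure just described translates directly into code. A minimal sketch (networkx's nx.core_number computes the same indices, which we use as a check):

    import networkx as nx

    def k_shell_indices(G):
        """Iteratively peel off minimum-degree nodes; returns each node's k-shell index."""
        H = G.copy()
        shell = {}
        k = 0
        while len(H) > 0:
            k = max(k, min(d for _, d in H.degree()))
            # remove all nodes with degree <= k, repeating until none remain
            while True:
                low = [n for n, d in H.degree() if d <= k]
                if not low:
                    break
                for n in low:
                    shell[n] = k
                H.remove_nodes_from(low)
        return shell

    G = nx.karate_club_graph()
    assert k_shell_indices(G) == nx.core_number(G)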
... An H(n)-shell is the set of nodes with n-th order Hirsch index equal to some value h; an H(n)-core is defined as the set of all nodes with n-th order Hirsch index larger than or equal to a given value h. The relevance of this organization has been discussed in Ref. [9], where it is argued that, in some instances, the H(n) index of a node can be a better predictor of the node's influence [6] in epidemic spreading than the degree or the coreness. ...
... In recent years much activity has been devoted to the identification of influential spreaders in networks [4,6,33], i.e. the nodes which maximize the extent of spreading events initiated by them. The goal is to find which of the many possible centralities based solely on the network topology (such as degree, betweenness, K-core, etc.) is most correlated with the actual spreading power of nodes. ...
Preprint
The generalized H(n) Hirsch index of order n has been recently introduced and shown to interpolate between the degree and the K-core centrality in networks. We provide a detailed analytical characterization of the properties of sets of nodes having the same H(n), within the annealed network approximation. The connection between the Hirsch indices and the degree is highlighted. Numerical tests in synthetic uncorrelated networks and real-world correlated ones validate the findings. We also test the use of the Hirsch index for the identification of influential spreaders in networks, finding that it is in general outperformed by the recently introduced Non-Backtracking centrality.
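The H(n) hierarchy can be computed by repeatedly applying the h-index operator to neighbor values, starting from the degree. A sketch (for large n the values settle onto the coreness, consistent with the interpolation described above):

    import networkx as nx

    def h_operator(values, G):
        """One step: each node's value becomes the h-index of its neighbors' values."""
        new = {}
        for i in G:
            nb = sorted((values[j] for j in G.neighbors(i)), reverse=True)
            new[i] = max([r for r, v in enumerate(nb, 1) if v >= r], default=0)
        return new

    def hirsch_index_n(G, n):
        """H(n) for every node: H(0) is the degree; H(n) applies the operator n times."""
        values = dict(G.degree())
        for _ in range(n):
            values = h_operator(values, G)
        return values

    G = nx.karate_club_graph()
    print(hirsch_index_n(G, 0) == dict(G.degree()))    # H(0) is the degree
    print(hirsch_index_n(G, 50) == nx.core_number(G))  # large n recovers the coreness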
... Although such methods can achieve high solution quality, they are computationally expensive. Topology-based measures, where influential users are identified by their ranking relative to centrality measures such as discounted degree centrality [8] or PageRank centrality [9], avoid this expensive overhead, but the quality of their solutions may be slightly lower or less robust to the network structure. ...
... For this comparative study, we explore a range of widely used centrality measures [9,34]. The six centrality measures used in our experiments are briefly summarized in Table 3. ...
Article
Full-text available
This paper investigates the effectiveness of centrality measures for the influence maximization problem in competitive social networks (SNs). We consider a framework, which we call “I-Game” (Influence Game), to conceptualize the adoption of competing products as a strategic game. Firms, as players, aim to maximize the adoption of their products, considering the possible rational choice of their competitors under a competitive diffusion model. They independently and simultaneously select their seeds (initial adopters) using an algorithm from a finite strategy space of algorithms. Since strategies may agree to select similar seeds, it is necessary to include an initial seed tie-breaking rule into the game model of the I-Game. We perform an empirical study in a two-player game under the competitive independent cascade model with three different seed-tie-breaking rules using four real-world SNs. The objective is to compare the performance of centrality-based strategies with some state-of-the-art algorithms used in the non-competitive influence maximization problem. The experimental results show that Nash equilibria vary according to the SN, seed-tie-breaking rules, and budgets. Moreover, they reveal that classical centrality measures outperform the most effective propagation-based algorithms in a competitive diffusion setting in three graphs. We attempt to explain these results by introducing a novel metric, the Early Influence Diffusion (EID) index, which measures the early influence diffusion of a strategy in a non-competitive setting. The EID index may be considered a valuable metric for predicting the effectiveness of a strategy in a competitive influence diffusion setting.
... This distinction is also relevant to the "influence maximization" or "seeding" problem, i.e., the choice of the best subsets of network members with whom to initiate the spread of a new idea, product, or behavior in order to maximize the likelihood of large-scale adoption [110]. The idea that a few influentials may have a disproportionate effect on spread has inspired a series of studies on social hubs, trendsetters, influencers, and influence maximization in computer science [114], management science [110,115], and network science [116], among others. Across these diverse domains, researchers have developed various network-based algorithms to identify effective influencers and, in some cases, validate them via simulations under different scenarios [116] and field experiments [4,115,117]. ...
... The idea that a few influentials may have a disproportionate effect on spread has inspired a series of studies on social hubs, trendsetters, influencers, and influence maximization in computer science [114], management science [110,115], and network science [116], among others. Across these diverse domains, researchers have developed various network-based algorithms to identify effective influencers and, in some cases, validate them via simulations under different scenarios [116] and field experiments [4,115,117]. Sometimes, however, greater spread is instead achieved by building a critical mass of easily influenced people rather than by targeting a few central influencers [118][119][120]. ...
Preprint
Full-text available
Understanding the collective dynamics behind the success of ideas, products, behaviors, and social actors is critical for decision-making across diverse contexts, including hiring, funding, career choices, and the design of interventions for social change. Methodological advances and the increasing availability of big data now allow for a broader and deeper understanding of the key facets of success. Recent studies unveil regularities beneath the collective dynamics of success, pinpoint underlying mechanisms, and even enable predictions of success across diverse domains, including science, technology, business, and the arts. However, this research also uncovers troubling biases that challenge meritocratic views of success. This review synthesizes the growing, cross-disciplinary literature on the collective dynamics behind success and calls for further research on cultural influences, the origins of inequalities, the role of algorithms in perpetuating them, and experimental methods to further probe causal mechanisms behind success. Ultimately, these efforts may help to better align success with desired societal values.
... Subsequent research indicates that the core nodes identified by the k-shell decomposition are the most influential spreaders [6]. Algorithms based on other centrality measures have been proposed to improve the accuracy of identifying influential spreaders [7][8][9][10]. They include the neighborhood coreness [11], improved eigenvector centrality [12,13], H-index [14,15] and nonbacktracking centrality [16]. ...
... In most studies on identifying influential spreaders so far, the networks are taken to be unweighted and undirected. Each edge is treated as equivalent in its function, as in the centralities and ranking methods [10]. However, edges in a network can be quite different [21]. ...
Preprint
We propose an efficient and accurate measure for ranking spreaders and identifying the influential ones in spreading processes in networks. While the edges determine the connections among the nodes, their specific role in spreading should be considered explicitly. An edge connecting nodes $i$ and $j$ may differ in its importance for spreading from $i$ to $j$ and from $j$ to $i$. The key issue is whether node $j$, after being infected by $i$ through the edge, would reach out to other nodes that $i$ itself could not reach directly. It becomes necessary to invoke two unequal weights $w_{ij}$ and $w_{ji}$ characterizing the importance of an edge according to the neighborhoods of nodes $i$ and $j$. The total asymmetric directional weight originating from a node leads to a novel measure $s_i$ which quantifies the impact of the node in spreading processes. An $s$-shell decomposition scheme further assigns an $s$-shell index or weighted coreness to the nodes. The effectiveness and accuracy of rankings based on $s_i$ and the weighted coreness are demonstrated by applying them to nine real-world networks. Results show that they generally outperform rankings based on the nodes' degree and k-shell index, while maintaining a low computational complexity. Our work represents a crucial step towards understanding and controlling the spread of diseases, rumors, information, trends, and innovations in networks.
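A sketch of this construction, under the illustrative assumption that $w_{ij}$ counts the neighbors of $j$ lying outside $i$'s direct reach (the paper's exact definition may differ):

    import networkx as nx

    def asymmetric_weights(G):
        """w[(i, j)]: how many of j's neighbors node i cannot reach directly
        (assumed reading of the construction, for illustration only)."""
        w = {}
        for i, j in G.edges():
            Ni = set(G.neighbors(i)) | {i}
            Nj = set(G.neighbors(j)) | {j}
            w[(i, j)] = len(Nj - Ni)  # importance of the edge for spreading i -> j
            w[(j, i)] = len(Ni - Nj)  # importance for spreading j -> i
        return w

    def spreading_strength(G):
        """s_i: total asymmetric directional weight originating from node i."""
        w = asymmetric_weights(G)
        return {i: sum(w[(i, j)] for j in G.neighbors(i)) for i in G}

    G = nx.karate_club_graph()
    s = spreading_strength(G)
    print(sorted(s, key=s.get, reverse=True)[:5])  # top-5 candidate spreaders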
... In this Section, we investigate the problem of identifying influential spreaders for coinfections. For single spreading processes this issue has attracted a lot of interest in recent years [26,27]. The problem is the following. ...
... It is clear that degree is positively correlated with ρ(p), but the detailed structure of the contact pattern makes, in some cases, centralities such as the k-core index, betweenness or eigenvector centrality better predictors of the spreading influence [27]. The mapping of SIR dynamics to bond percolation [28][29][30] allows one, at the epidemic threshold $p = p_c$, to identify the non-backtracking centrality [31] as the exact solution (i.e. a centrality perfectly correlated with the spreading influence) on locally tree-like networks [32]. ...
Preprint
Full-text available
The spread of an infectious disease can be promoted by previous infections with other pathogens. This cooperative effect can give rise to violent outbreaks, reflecting the presence of an abrupt epidemic transition. As for other diffusive dynamics, the topology of the interaction pattern of the host population plays a crucial role. It was conjectured that a discontinuous transition arises when there are relatively few short loops and many long loops in the contact network. Here we focus on the role of local clustering in determining the nature of the transition. We consider two mutually cooperative pathogens diffusing in the same population: an individual already infected with one disease has an increased probability of getting infected by the other. We look at how a disease obeying the susceptible-infected-removed dynamics spreads on contact networks with tunable clustering. Using numerical simulations we show that for large cooperativity the epidemic transition is always abrupt, with the discontinuity decreasing as clustering is increased. For large clustering strong finite size effects are present and the discontinuous nature of the transition is manifest only in large networks. We also investigate the problem of influential spreaders for cooperative infections, revealing that both cooperativity and clustering strongly enhance the dependence of the spreading influence on the degree of the initial seed.
... As a fundamental concept in social network analysis and complex networks, network centrality has received considerable attention from the scientific community [69]. The value of centrality metrics can be used to rank nodes or edges in networks [61], with an aim to identify important nodes or edges for different applications. In the past decades, a host of centrality measures were presented to describe and characterize the roles of nodes or edges in networks [2,3,8,15,17,56,83,91]. ...
Article
Full-text available
For random walks on a graph $\mathcal{G}$ with $n$ vertices and $m$ edges, the mean hitting time $H_j$ from a vertex chosen from the stationary distribution to vertex $j$ measures the importance of $j$, while the Kemeny constant $\mathcal{K}$ is the mean hitting time from one vertex to another selected randomly according to the stationary distribution. In this article, we first establish a connection between the two quantities, representing $\mathcal{K}$ in terms of $H_j$ for all vertices. We then develop an efficient algorithm estimating $H_j$ for all vertices and $\mathcal{K}$ in nearly linear time in $m$. Moreover, we extend the centrality $H_j$ of a single vertex to $H(S)$ of a vertex set $S$, and establish a link between $H(S)$ and some other quantities. We further study the NP-hard problem of selecting a group $S$ of $k \ll n$ vertices with minimum $H(S)$, whose objective function is monotonic and supermodular. We finally propose two greedy algorithms approximately solving the problem. The former has an approximation factor $(1-\frac{k}{k-1}\frac{1}{e})$ and $O(kn^3)$ running time, while the latter returns a $(1-\frac{k}{k-1}\frac{1}{e}-\epsilon)$-approximation solution in nearly linear time in $m$, for any parameter $0 < \epsilon < 1$. Extensive experimental results validate the performance of our algorithms.
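For a small graph both quantities can be computed exactly from the fundamental matrix $Z = (I - P + \mathbf{1}\pi^{\top})^{-1}$ of the random walk, via $H_j = Z_{jj}/\pi_j - 1$ and $\mathcal{K} = \operatorname{tr}(Z) - 1$, which also makes the stated connection $\mathcal{K} = \sum_j \pi_j H_j$ visible. A dense-algebra sketch (cubic cost; the article's contribution is precisely avoiding this route on large graphs):

    import numpy as np
    import networkx as nx

    def hitting_centrality_and_kemeny(G):
        """Exact H_j for all j and the Kemeny constant, via the fundamental matrix."""
        A = nx.to_numpy_array(G)
        d = A.sum(1)
        P = A / d[:, None]                    # random-walk transition matrix
        pi = d / d.sum()                      # stationary distribution
        n = len(d)
        Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
        H = np.diag(Z) / pi - 1.0             # stationary-start mean hitting times
        K = np.trace(Z) - 1.0                 # Kemeny constant
        assert np.isclose(K, (pi * H).sum())  # K = sum_j pi_j H_j
        return H, K

    H, K = hitting_centrality_and_kemeny(nx.karate_club_graph())
    print("Kemeny constant:", round(K, 3))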
... The prediction of when, how and to what extent an epidemic outbreak will take place is one of the most important challenges of modern science, with fundamental implications for public health [18]. For instance, predicting the expected number of cases once a disease is seeded in a given individual or group of subjects would enable the identification of the most influential spreaders [7,10,19] and contribute to developing methods for efficient disease control via vaccination or other procedures. Specifically, we consider an SIR model whose dynamics has an absorbing state, i.e. the number of infected nodes goes to zero in finite time [3,18]. ...
Article
Full-text available
Estimating the outcome of a given dynamical process from structural features is a key unsolved challenge in network science. This goal is hampered by difficulties associated with nonlinearities, correlations and feedbacks between the structure and dynamics of complex systems. In this work, we develop an approach based on machine learning algorithms that provides an important step towards understanding the relationship between the structure and dynamics of networks. In particular, it allows us to estimate from the network structure the outbreak size of a disease starting from a single node, as well as the degree of synchronicity of a system made up of Kuramoto oscillators. We show which topological features of the network are key for this estimation and provide a ranking of the importance of network metrics with much higher accuracy than previously done. For epidemic propagation, the k-core plays a fundamental role, while for synchronization, the betweenness centrality and accessibility are the measures most related to the state of an oscillator. For all the networks, we find that random forests can predict the outbreak size or synchronization state with high accuracy, indicating that the network structure plays a fundamental role in the spreading process. Our approach is general and can be applied to almost any dynamic process running on complex networks. Also, our work is an important step towards applying machine learning methods to unravel dynamical patterns that emerge in complex networked systems.
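A schematic of the supervised setup this abstract describes: structural node features in, simulated outbreak size out. The feature set, SIR parameters and simulation below are stand-ins, not the paper's pipeline:

    import random
    import numpy as np
    import networkx as nx
    from sklearn.ensemble import RandomForestRegressor

    def sir_outbreak_size(G, seed, beta=0.1, n_runs=30):
        """Mean final outbreak size of a discrete-time SIR (recovery after one step)."""
        sizes = []
        for _ in range(n_runs):
            infected, recovered = {seed}, set()
            while infected:
                new = {j for i in infected for j in G.neighbors(i)
                       if j not in infected and j not in recovered
                       and random.random() < beta}
                recovered |= infected
                infected = new
            sizes.append(len(recovered))
        return float(np.mean(sizes))

    G = nx.karate_club_graph()
    feats = {"degree": dict(G.degree()),
             "core": nx.core_number(G),
             "betweenness": nx.betweenness_centrality(G)}
    X = np.array([[feats[f][v] for f in feats] for v in G])
    y = np.array([sir_outbreak_size(G, v) for v in G])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    print(dict(zip(feats, model.feature_importances_.round(3))))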
... Some research suggests that criteria derived from network structure can be used to select the most effective users. Research by Lü et al. (2016) and Das et al. (2018) focuses on metrics applied to rank each node in terms of its influence, known as centrality measures. These measures, derived from the network structure, inform the selection of the most influential users. ...
Article
The landscape of information access has evolved significantly over time, with the advent of search engines, social media platforms, and the widespread use of the internet. These developments have fostered a global communication network, resulting in intricate connections between individuals. Online social networks have emerged as key facilitators of social interaction, expediting the exchange of information and playing a pivotal role in content dissemination. Within these networks, certain individuals, termed as Key Players, wield considerable influence, profoundly impacting information diffusion. Thus, the identification of the most influential individuals within complex network structures stands as a crucial challenge. In this study, we employ modularity and eigenvector centrality metrics to designate nodes for initial activation, aiming at influence maximization in social networks. Visualization and analysis of the dataset are conducted using Gephi software, providing insights into the dynamics of the social network structure and facilitating the identification of key players.
... While the present work employs traditional graph algorithms, exploration of other algorithms for the assessment of node influence (Chen et al., 2012; Hao et al., 2018; Liu et al., 2018; Lv et al., 2019; Raychaudhuri et al., 2020; Salavati et al., 2019; Srinivas and Rajendran, 2019; Wu et al., 2019; Zhang, 2014; Fei et al., 2018) could add further value. This is in alignment with the views of Lü et al. (2016), who outline the need for the development of centrality measures for the identification of influential nodes. Moreover, establishing performance benchmarks for these measures could aid their application in future research. ...
... Up to now, researchers have introduced various centrality indexes of a node in a network system to quantify the importance of nodes in different situations, among which degree centrality [26,27], closeness centrality [28,29], betweenness centrality [30], eccentricity centrality [31], eigenvector centrality [32][33][34], and so on are widely used. Unfortunately, these indexes follow different standards and were put forward for specific application scenarios, so they cannot be generalized. ...
Article
Full-text available
In light of the fact that existing centrality indexes disregard the influence of dynamic characteristics and lack generalizability due to standard diversification, this study investigates dynamic survivability centrality, which enables quantification of oscillators' capacity to impact the dynamic survivability of nonlinear oscillator systems. Taking an Erdős–Rényi random graph system consisting of Stuart–Landau oscillators as an illustrative example, the typical symmetric synchronization is considered as the key mission to be accomplished, and the dynamic survivability centrality value is found to depend on both the system size and the connection density. Starting with a small-scale system, the correctness of the theoretical results and their superiority over traditional indexes are verified. Further, we present quantitative results by means of error analysis, comparison of the distributions of various indexes and exploration of the relationship with the system structure, and give the position of the key oscillator. The results demonstrate a negligible error between the theoretical and numerical outcomes, and highlight that the distribution of dynamic survivability centrality closely resembles the distribution of system state changes. These conclusions serve as evidence for the accuracy and validity of the proposed index. The findings provide an effective approach to protecting systems so as to improve dynamic survivability.
... As there are many ways to vary the underlying assumptions about the structure of contact patterns, the objective function (i.e. how to measure the severity of a disease outbreak), the disease dynamics, and the information available to exploit these structures, these methods form a very rich and diverse theory [12]. Typically, it is implicitly assumed that for vaccination or quarantine the nodes of a network can be ranked with respect to the objective function: if n nodes are to be vaccinated or quarantined, the optimal choice is always to take the top n nodes of the ranking. ...
Preprint
Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem -- methods for identifying influential spreaders rank nodes according to how influential they are. In this work, we show that the ranking approach does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of n most important nodes to vaccinate does not need to have any node in common with the set of n+1 most important nodes. We propose a method for quantifying the extent and impact of this phenomenon, and show that it is common in both empirical and model networks.
... For example, it can be utilized to identify the most influential person in an online social network [3], the most crucial artery in transport congestion [4], or the most important financial institution in the global economy [5]. Over 30 different centrality measures (e.g., degree centrality, betweenness centrality, closeness centrality, eigenvector centrality, and control centrality) have been examined in the literature [6][7][8][9]. Among these, eigenvector centrality, defined as the leading eigenvector of the adjacency matrix of a graph, has received increasing attention [10,11]. ...
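Eigenvector centrality as defined here (the leading eigenvector of the adjacency matrix) reduces to a power iteration. A minimal sketch, assuming a connected, non-bipartite graph so the iteration converges:

    import numpy as np
    import networkx as nx

    def eigenvector_centrality(G, tol=1e-10, max_iter=1000):
        """Power iteration for the leading eigenvector of the adjacency matrix."""
        A = nx.to_numpy_array(G)
        x = np.ones(len(A)) / len(A)
        for _ in range(max_iter):
            x_new = A @ x
            x_new /= np.linalg.norm(x_new)   # renormalize at each step
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return dict(zip(G, x_new))

    c = eigenvector_centrality(nx.karate_club_graph())
    print(sorted(c, key=c.get, reverse=True)[:5])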
Preprint
Centrality is widely recognized as one of the most critical measures to provide insight in the structure and function of complex networks. While various centrality measures have been proposed for single-layer networks, a general framework for studying centrality in multilayer networks (i.e., multicentrality) is still lacking. In this study, a tensor-based framework is introduced to study eigenvector multicentrality, which enables the quantification of the impact of interlayer influence on multicentrality, providing a systematic way to describe how multicentrality propagates across different layers. This framework can leverage prior knowledge about the interplay among layers to better characterize multicentrality for varying scenarios. Two interesting cases are presented to illustrate how to model multilayer influence by choosing appropriate functions of interlayer influence and design algorithms to calculate eigenvector multicentrality. This framework is applied to analyze several empirical multilayer networks, and the results corroborate that it can quantify the influence among layers and multicentrality of nodes effectively.
... By understanding the contact networks, we should thus be able to better predict and mitigate disease outbreaks. These are the premises of network epidemiology [1,2], one of whose most active questions is how to exploit the contact network in targeted vaccination campaigns [3,4]. Until now, targeted vaccination has mostly been a theoretical topic. ...
Preprint
We investigate methods to vaccinate contact networks -- i.e. removing nodes in such a way that disease spreading is hindered as much as possible -- with respect to their cost-efficiency. Any real implementation of such protocols would come with costs related both to the vaccination itself, and gathering of information about the network. Disregarding this, we argue, would lead to erroneous evaluation of vaccination protocols. We use the susceptible-infected-recovered model -- the generic model for diseases making patients immune upon recovery -- as our disease-spreading scenario, and analyze outbreaks on both empirical and model networks. For different relative costs, different protocols dominate. For high vaccination costs and low costs of gathering information, the so-called acquaintance vaccination is the most cost efficient. For other parameter values, protocols designed for query-efficient identification of the network's largest degrees are most efficient.
... There exist several widely used link ranking methods that can potentially be used in dynamic range maximization [31][32][33][34]. Here we introduce some heuristic methods that are applicable to large-scale networks. ...
Preprint
We study the strategy to optimally maximize the dynamic range of excitable networks by removing the minimal number of links. A network of excitable elements can distinguish a broad range of stimulus intensities and has its dynamic range maximized at criticality. In this study, we formulate the activation propagation in excitable networks as a message passing process in which the critical state is reached when the largest eigenvalue of the weighted non-backtracking (WNB) matrix is exactly one. By considering the impact of single link removal on the largest eigenvalue, we develop an efficient algorithm that aims to identify the optimal set of links whose removal will drive the system to the critical state. Comparisons with other competing heuristics on both synthetic and real-world networks indicate that the proposed method can maximize the dynamic range by removing the smallest number of links, and at the same time maintain the largest size of the giant connected component.
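The criticality condition can be checked directly on small graphs: build the (weighted) non-backtracking matrix on directed edges and compute its leading eigenvalue. A sketch with unit weights by default (the paper's link-selection heuristic is not reproduced here):

    import networkx as nx
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import eigs

    def nb_leading_eigenvalue(G, weight=None):
        """Leading eigenvalue of the (weighted) non-backtracking matrix.
        Entry ((i->j), (j->l)) is w_jl for l != i; criticality corresponds to value 1."""
        directed = list(G.edges()) + [(j, i) for i, j in G.edges()]
        index = {e: k for k, e in enumerate(directed)}
        B = lil_matrix((len(directed), len(directed)))
        for (i, j), k in index.items():
            for l in G.neighbors(j):
                if l != i:  # non-backtracking: forbid the immediate return j -> i
                    B[k, index[(j, l)]] = G[j][l].get(weight, 1.0) if weight else 1.0
        return abs(eigs(B.tocsr(), k=1, which="LM", return_eigenvectors=False)[0])

    print(nb_leading_eigenvalue(nx.karate_club_graph()))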
... Recently, the focus of network science has been shifting from revealing macroscopic statistical regularities (e.g., scale-free [18], assortative mixing [19], small-world [20] and clustering [20] properties) to discovering the mesoscopic structural organization (communities [21,22] and motifs [23,24]), and then to distinguishing the roles played by individual nodes and links. In particular, the discovery of the scale-free property implies the significance of identifying influential nodes [25][26][27]. For example, vital disease-related genes can help diagnose known diseases and understand the features of unknown diseases [5,7], essential spreaders help us better control the outbreak of epidemics [28][29][30], and influential customers allow us to conduct successful advertising campaigns at low cost [31,32]. ...
Preprint
Identifying influential nodes in networks is a significant and challenging task. Among many centrality indices, the k-shell index performs very well in finding influential spreaders. However, the traditional method for calculating the k-shell indices of nodes needs the global topological information, which limits its applications in large-scale dynamically growing networks. Recently, Lü et al. [Nature Communications 7 (2016) 10168] proposed a novel asynchronous algorithm to calculate the k-shell indices, which is suitable for large-scale growing networks. In this paper, we propose two algorithms to select nodes and update their intermediate values towards the k-shell indices, which can help accelerate the convergence of the calculation of k-shell indices. The former algorithm takes into account the degrees of nodes, while the latter prefers to choose nodes whose neighbors' values have been changed recently. We test these two methods on four real networks and three artificial networks. The results suggest that the two algorithms can reduce the convergence time by up to 75.4% and 92.9% on average, respectively, compared with the original asynchronous updating algorithm.
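The asynchronous iteration itself is short: keep replacing one node's value with the h-index of its neighbors' values until nothing changes, and the values converge to the coreness regardless of the update order (the result of Lü et al. cited above). The degree-based schedule below is only a rough illustration of the first selection strategy; the paper's precise rules differ:

    import networkx as nx

    def async_coreness(G):
        """Asynchronous h-index updates from the degree; converges to the k-shell indices."""
        values = dict(G.degree())
        queue = sorted(G, key=G.degree, reverse=True)  # visit high-degree nodes first
        while queue:
            i = queue.pop(0)
            nb = sorted((values[j] for j in G.neighbors(i)), reverse=True)
            h = max([r for r, v in enumerate(nb, 1) if v >= r], default=0)
            if h != values[i]:
                values[i] = h
                # a changed value can invalidate neighbors, so schedule them again
                queue.extend(j for j in G.neighbors(i) if j not in queue)
        return values

    G = nx.karate_club_graph()
    assert async_coreness(G) == nx.core_number(G)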
... Understanding cascades is also important for optimizing viral marketing [15][16][17]. Yet, it is challenging to find the set of initiators (also called seeds) which, when put into a new state (opinion/idea/product), will maximize the spread of this state [18][19][20][21][22][23][24][25][26][27][28]. ...
Preprint
Influence maximization is an NP-hard problem of selecting the optimal set of influencers in a network. Here, we propose two new approaches to influence maximization based on two very different metrics. The first metric, termed the Balanced Index (BI), is fast to compute and assigns top values to two kinds of nodes: those with high resistance to adoption, and those with large out-degree. This is done by linearly combining three properties of a node: its degree, its susceptibility to new opinions, and the impact its activation will have on its neighborhood. Controlling the weights between those three terms has a huge impact on performance. The second metric, termed the Group Performance Index (GPI), measures the performance of each node as an initiator when it is part of a randomly selected initiator set. In each such selection, the score assigned to each teammate is inversely proportional to the number of initiators causing the desired spread. These two metrics are applicable to various cascade models; here we test them on the Linear Threshold Model with fixed and known thresholds. Furthermore, we study the impact of network degree assortativity and threshold distribution on the cascade size for metrics including ours. The results demonstrate that our two metrics deliver strong performance for influence maximization.
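The abstract fixes only the ingredients of BI (degree, susceptibility, neighborhood impact) and not their exact functional forms, so the following is a loose sketch with placeholder terms and weights, not the paper's formula:

    import random
    import networkx as nx

    def balanced_index(G, thresholds, a=0.3, b=0.3, c=0.4):
        """BI-style score: weighted mix of degree, resistance to adoption, and the
        expected impact on neighbors (terms and weights are illustrative assumptions)."""
        bi = {}
        for i in G:
            resistance = thresholds[i]  # hard-to-convince nodes score high
            impact = sum(1.0 / (thresholds[j] * G.degree(j) + 1e-9)
                         for j in G.neighbors(i))  # easy-to-tip neighbors score high
            bi[i] = a * G.degree(i) + b * resistance + c * impact
        return bi

    G = nx.karate_club_graph()
    thresholds = {i: random.uniform(0.1, 0.9) for i in G}  # known LT thresholds
    bi = balanced_index(G, thresholds)
    print(sorted(bi, key=bi.get, reverse=True)[:5])  # candidate seed set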
... One of the central questions in theoretical epidemiology [10,15,22] is to identify individuals that are important for an infection to spread [17,26]. What "important" means depends on particular scenarios-what kind of disease that spreads and what can be done about it. ...
Preprint
We investigate three aspects of the importance of nodes with respect to Susceptible-Infectious-Removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size) and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets do not overlap. We find that node separation is more important than centrality when there is more than one active node, that vaccination and influence maximization are the most different aspects of importance, and that the three aspects are more similar when the infection rate is low.
... It is possible to study the structure of the network (Figure 1) in terms of graph theory. Even though there are several ways to identify the most important nodes inside a network [161], the degree centrality provides some insight if, as a first approximation, the network is assumed to be undirected. This parameter is defined as the total number of nodes (alloys) connected to a particular one. ...
Article
Full-text available
A method is developed to exploit data on complex materials behaviors that are impossible to tackle by conventional machine learning tools. A pairwise comparison algorithm is used to assess a particular property among a group of different alloys tested simultaneously in identical conditions. Even though such characteristics can be evaluated differently across teams, if a series of the same alloys are analyzed among two or more studies, it is feasible to infer an overall ranking among materials. The obtained ranking is later fitted with respect to the alloy's composition by a Gaussian process. The predictive power of the method is demonstrated in the case of the resistance of metallic materials to molten salt corrosion and wear. In this case, the method is applied to the design of wear-resistant hard-facing alloys by also associating it with a combinatorial optimization of their composition by a multi-objective genetic algorithm. New alloys are selected and fabricated, and their experimental behavior is compared to that of concurrent materials. This generic method can therefore be applied to model other complex material properties-such as environmental resistance, contact properties, or processability-and to design alloys with improved performance.
Article
Full-text available
Granular mixtures with size differences can segregate when subjected to shaking or shear. This study investigates the mechanism underlying the inverse grading segregation of single coarse particles with varying sizes under cyclic shear. A self-developed two-dimensional testing device combined with three-dimensional printing technology and the image identification capabilities of the segment anything model enabled the construction of a cyclic shear numerical model based on rigid blocks. The analysis concentrated on the movement of coarse particles and the evolution of the macroscopic structure of the particle system, and the local topological structures surrounding single coarse particles. The findings reveal the following: (1) Larger coarse particle sizes and lower shape factors under cyclic shear result in shorter times to free surface and higher vertical velocities. (2) Throughout the cycles, the vertical net force acting on each coarse particle fluctuates around zero, while its vertical position displays a zigzag upward trend. (3) Within a single typical cycle, larger coarse particles increase the local void ratio, aiding their lift. Vertical displacement and net force exhibit a double peak pattern inversely related to coordination number, while horizontal displacement fluctuates periodically around zero. (4) Weighted local degree centrality negatively correlates with vertical displacement of single coarse particles, reflecting the dual influence of particle size and importance on segregation velocity. Fine particles occupying the two lower corners of single coarse particles create the lifting effect, driving their zigzag upward motion. Additionally, larger coarse particles enhance their importance, accelerating the segregation process.
Article
Stochastic noise is prevalent in the real world and plays a significant role in fields such as finance, telecommunications, and probability theory. Moreover, in the field of network science, this type of random noise spreads throughout the entire network according to its topology, exerting crucial and sometimes determinative effects on various properties of networks or dynamical systems. In this paper, we propose an alternative framework to the traditional approaches used in complex networks (such as ODEs and PDEs) by employing SDEs (stochastic differential equations) and Itô's formula to investigate the impact of stochastic processes. Through this framework, we focus on exploring the effect of stochastic noise on signal propagation and dimension reduction. Interestingly, our theoretical and simulation results demonstrate that stochastic noise significantly impacts both propagation time and reduced systems, showing exponential differences in their expected values. Furthermore, the framework we have developed reveals the fundamental pattern of how stochastic noise influences dynamical properties, and it introduces a basic analytical method for analyzing stochastic noise. Importantly, this framework can be directly applied to other domains of complex networks, such as dynamical control and network stability.
Article
Influential node identification has long been a focal point for researchers. Existing methods primarily focus on the individual topological characteristics of the nodes, making it difficult to accurately identify key nodes within a network. This paper introduces an improved local gravity model (ILGM) that incorporates node position, paths, quantity and injection to evaluate the influence of each node. The ILGM further explores the topological characteristics of neighbouring nodes, incorporating path and quantity data from adjacent nodes. This enhancement significantly improves the accuracy of the algorithm’s results. Empirical evaluations conducted on five real-world networks and one artificial network demonstrate that the proposed model effectively identifies influential nodes in complex networks.
Article
Full-text available
Identifying influential nodes in real networks is significant for studying and analyzing the structural as well as functional aspects of networks. VoteRank is a simple and effective algorithm to identify high-spreading nodes. However, the accuracy and monotonicity of the VoteRank algorithm are poor, as the network topology fails to be taken into account. Given the nodes' attributes and neighborhood structure, this paper puts forward an Edge Weighted VoteRank (EWV) algorithm for identifying influential nodes in the network. The proposed algorithm draws inspiration from human voting behavior and expresses the attractiveness of nodes to their first-order neighborhood using the weights of connecting edges. Similarity between nodes is introduced into the voting process, further enhancing the accuracy of the method. Additionally, the EWV algorithm addresses the problem of influential node clustering by reducing the voting ability of nodes in the second-order neighborhood of the most influential nodes. The validity of the presented algorithm is verified through experiments conducted on 12 different real networks of various sizes and structures, directly comparing it with 7 competing algorithms. Empirical results indicate the superiority of the presented algorithm over the seven competing algorithms with respect to node differentiation ability, effectiveness, and ranked-list accuracy.
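EWV refines the plain VoteRank scheme, which is easy to state and implement; the edge-weighted votes and similarity terms of the paper are omitted in this baseline sketch:

    import networkx as nx

    def vote_rank(G, k):
        """Plain VoteRank: neighbors vote with their current voting ability; each round
        elects the top scorer, zeroes its ability, and weakens its neighbors' ability."""
        ability = {i: 1.0 for i in G}
        avg_deg = sum(d for _, d in G.degree()) / len(G)
        elected = []
        for _ in range(k):
            score = {i: sum(ability[j] for j in G.neighbors(i))
                     for i in G if i not in elected}
            winner = max(score, key=score.get)
            elected.append(winner)
            ability[winner] = 0.0
            for j in G.neighbors(winner):
                ability[j] = max(0.0, ability[j] - 1.0 / avg_deg)
        return elected

    print(vote_rank(nx.karate_club_graph(), 5))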
Article
Purpose This study aims to develop an analytical model for generating relational rent within network organizations and to establish a comprehensive framework for the allocation of such rent. Design/methodology/approach The design stage involves the formulation of integrated computer-aided manufacturing definition (IDEF0) methodologies. The construction stage comprises the detailed elaboration of three distinct stages for rent allocation methods. Findings The “relational rent” perspective has illustrated that firms create value and distribute rent within network organizations by identifying partners with complementary resources, establishing high levels of robust informal trust, sharing knowledge and making customized investments tailored to their partners’ needs. Practical implications This innovative approach, for the first time, sheds light on the path for managers to secure the stability of network organizations by implementing multiple iterations of benefit distribution. However, it remains an area lacking standardized guidelines for decision-makers. Essentially, our paper pioneers the endeavor, marking the inaugural step toward ensuring network organization stability through profit distribution decisions. Additionally, it constitutes the initial attempt to bridge the gap between qualitative analysis and a quantitative profit distribution framework. Originality/value This rent allocation method unequivocally highlights the importance of efficient allocation within network organizations, emphasizing the streamlining of the allocation process and thus substantiating the rationality of the proposed method.
Preprint
Information flow, opinion, and epidemics spread over structured networks. When using individual node centrality indicators to predict which nodes will be among the top influencers or spreaders in a large network, no single centrality has consistently good ranking power. We show that statistical classifiers using two or more centralities as input are instead consistently predictive over many diverse, static real-world topologies. Certain pairs of centralities cooperate particularly well in statistically drawing the boundary between the top spreaders and the rest: local centralities measuring the size of a node's neighbourhood benefit from the addition of a global centrality such as the eigenvector centrality, closeness, or the core number. This is, intuitively, because a local centrality may rank highly some nodes which are located in dense, but peripheral regions of the network---a situation in which an additional global centrality indicator can help by prioritising nodes located more centrally. The nodes selected as superspreaders will usually jointly maximise the values of both centralities. As a result of the interplay between centrality indicators, training classifiers with seven classical indicators leads to a nearly maximum average precision function (0.995) across the networks in this study.
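The local-plus-global pairing described above can be prototyped in a few lines. The "top spreader" labels below are stand-ins derived from the core number, not the study's simulated ground truth:

    import numpy as np
    import networkx as nx
    from sklearn.linear_model import LogisticRegression

    G = nx.karate_club_graph()
    degree = dict(G.degree())                 # local centrality
    eig = nx.eigenvector_centrality_numpy(G)  # global centrality
    X = np.array([[degree[v], eig[v]] for v in G])

    # stand-in ground truth: nodes in the innermost core labeled as "top spreaders"
    core = nx.core_number(G)
    cut = np.quantile(list(core.values()), 0.9)
    y = np.array([int(core[v] >= cut) for v in G])

    clf = LogisticRegression().fit(X, y)
    print("training accuracy:", round(clf.score(X, y), 3))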
Article
Background: Current experimental practices typically produce large multidimensional datasets. Distance matrix calculation between elements (e.g., samples) for such data, although being often necessary in preprocessing for statistical inference or visualization, can be computationally demanding. Data sparsity, which is often observed in various experimental data modalities, such as single-cell sequencing in bioinformatics or collaborative filtering in recommendation systems, may pose additional algorithmic challenges. Results: We present GPU-Assisted Distance Estimation Software (GADES), a graphical processing unit (GPU)-enhanced package that allows for massively parallel Kendall-$\tau$ distance matrix computation. The package's architecture involves specific memory management, which lifts the limits for the data size imposed by GPU memory capacity. Additional algorithmic solutions provide a means to address the data sparsity problem and reinforce the acceleration effect for sparse datasets. Benchmarking against available central processing unit-based packages on simulated and real experimental single-cell RNA sequencing or single-cell ATAC sequencing datasets demonstrated significantly higher speed for GADES compared to other methods for both sparse and dense data processing, with an additional performance boost for the sparse data. Conclusions: This work significantly contributes to the development of computational strategies for high-performance Kendall distance matrix computation and allows for the efficient processing of Big Data with the power of GPU. GADES is freely available at https://github.com/lab-medvedeva/GADES-main.
Preprint
Two very important problems regarding spreading phenomena in complex topologies are the optimal selection of node sets either to minimize or maximize the extent of outbreaks. Both problems are nontrivial when a small fraction of the nodes in the network can be used to achieve the desired goal. The minimization problem is equivalent to a structural optimization. The "superblockers", i.e., the nodes that should be removed from the network to minimize the size of outbreaks, are those nodes that make connected components as small as possible. "Superspreaders" are instead the nodes such that, if chosen as initiators, they maximize the average size of outbreaks. The identity of superspreaders is expected to depend not just on the topology, but also on the specific dynamics considered. Recently, it has been conjectured that the two optimization problems might be equivalent, in the sense that superblockers act also as superspreaders. In spite of its potentially groundbreaking importance, no empirical study has been performed to validate this conjecture. In this paper, we perform an extensive analysis over a large set of real-world networks to test the similarity between sets of superblockers and of superspreaders. We show that the two optimization problems are not equivalent: superblockers do not act as optimal spreaders.
Article
Mining key nodes in multilayer networks is a topic of considerable importance and widespread interest. This task is crucial for understanding and optimizing complex networks, with far-reaching applications in fields such as social network analysis and biological systems modeling. This paper proposes an effective and efficient fuzzy weighted information model (FWI) to analyze the influential nodes in multilayer networks. In this model, a Joule's law model is defined to quantify the information of the nodes in each layer of the multilayer network. The information of the nodes between layers is then measured by the Jensen–Shannon divergence. The influential nodes in the multilayer network are analyzed using the FWI model to aggregate the information within and between layers. Validation on real-world networks and comparison with other methods demonstrate that FWI is effective and offers better differentiation than existing methods in identifying key nodes in multilayer networks.
Preprint
This work compares several node (and network) criticality measures that quantify to what extent each node is critical with respect to the communication flow between nodes of the network, and introduces a new measure based on the Bag-of-Paths (BoP) framework. Network disconnection simulation experiments show that the new BoP measure outperforms all the other measures on a sample of Erdos-Renyi and Albert-Barabasi graphs. Furthermore, a faster (still O(n^3)), approximate BoP criticality relying on the Sherman-Morrison rank-one update of a matrix is introduced for tackling larger networks. This approximate measure performs similarly to the original, exact one.
Preprint
Finding the set of nodes which, when removed or (de)activated, can stop the spread of (dis)information, contain an epidemic, or disrupt the functioning of a corrupt/criminal organization is still one of the key challenges in network science. In this paper, we introduce the generalized network dismantling problem, which aims to find the set of nodes whose removal fragments the network into subcritical components at minimum cost. For unit costs, our formulation becomes equivalent to the standard network dismantling problem. Our non-unit cost generalization allows for the inclusion of topological cost functions related to node centrality and non-topological features such as the price, protection level or even social value of a node. To solve this optimization problem, we propose a method based on the spectral properties of a novel node-weighted Laplacian operator. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions in understanding the vulnerability and robustness of complex systems.
Preprint
Link prediction is an elemental challenge in network science, which has already found applications in guiding laboratory experiments, digging out drug targets, recommending friends in social networks, probing mechanisms of network evolution, and so on. With the simple assumption that the likelihood of a link between two nodes can be unfolded by a linear summation of neighboring nodes' contributions, we obtain the analytical solution of the optimal likelihood matrix, which shows remarkably better performance in predicting missing links than state-of-the-art algorithms, not only for simple networks but also for weighted and directed networks. To our surprise, even some degenerated local similarity indices derived from the solution outperform well-known local indices. This largely refines our knowledge: for example, the direct count of 3-hop paths between two nodes predicts missing links more accurately than the number of 2-hop paths (i.e., the number of common neighbors), whereas previous studies, as indicated by the local path index and the Katz index, always considered statistics on longer paths to be complementary to, but less important than, those on shorter paths.
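The two local indices contrasted above can be read directly off powers of the adjacency matrix; a minimal sketch (illustrative, not the paper's analytical solution):

```python
# Common neighbours = 2-hop path counts (A^2); the stronger 3-hop index = A^3.
import networkx as nx
import numpy as np

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
A2 = A @ A        # A2[x, y]: number of 2-hop paths, i.e. common neighbours
A3 = A2 @ A       # A3[x, y]: number of 3-hop paths

# Score every non-adjacent pair with either index; higher = more likely link.
nonedges = list(nx.non_edges(G))
best_by_3hop = max(nonedges, key=lambda e: A3[e[0], e[1]])
print(best_by_3hop, A2[best_by_3hop], A3[best_by_3hop])
```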
Preprint
The robustness of complex networks under targeted attacks is deeply connected to the resilience of complex systems, i.e., the ability to make appropriate responses to the attacks. In this article, we investigate the state-of-the-art targeted node attack algorithms and demonstrate that they become very inefficient when the cost of the attack is taken into consideration. We make the explicit assumption that the cost of removing a node is proportional to the number of adjacent links that are removed, i.e., higher-degree nodes have higher cost. Finally, for the case when it is possible to attack links, we propose a simple and efficient edge removal strategy named Hierarchical Power Iterative Normalized cut (HPI-Ncut). The results on real and artificial networks show that the HPI-Ncut algorithm outperforms all of the node removal and link removal attack algorithms when the cost of the attack is taken into consideration. In addition, we show that on sparse networks the complexity of this hierarchical power iteration edge removal algorithm is only O(n log^{2+ε}(n)).
Article
Full-text available
Transportation systems are vulnerable to disruptive events such as natural disasters, industrial accidents, terrorist attacks, and climate change. Vulnerability assessment is necessary to understand the impacts of these disruptive events, identify underlying deficiencies within the network, and improve transportation system resilience. Multi-modal transportation networks are often interdependent and form a “system of systems”, which creates a susceptibility to cascading indirect impacts within the integrated transportation network. Accurately modeling these interdependencies typically requires a large amount of data, such as traffic flow and travel demand information for transportation systems, which may not be available or accessible. This paper proposes a network topology-based framework to conduct an interdependent transportation network vulnerability analysis by introducing an algorithm to simulate cascading failures across transport systems. The proposed framework estimates the vulnerability of the network with respect to a specific hazard, combining the network topology and the functional attributes of the transportation infrastructure components. A case study with real-world data is conducted to demonstrate the applicability of the framework to the Houston freight transportation network, and to understand the network performance under different scenarios. This study presents an alternate method that stakeholders may use to assess interdependent transportation network vulnerability when more detailed flow-based data is not available.
Article
Full-text available
Pandemics like COVID-19 have a huge impact on human society and the global economy. Vaccines are effective in the fight against these pandemics but are often in limited supply, particularly in the early stages. Thus, it is imperative to distribute such crucial public goods efficiently. Identifying and vaccinating key spreaders (i.e., influential nodes) is an effective approach to break down the virus transmission network, thereby inhibiting the spread of the virus. Previous methods for identifying influential nodes in networks lack consistency in terms of effectiveness and precision. Their applicability also depends on the unique characteristics of each network. Furthermore, most of them rank nodes by their individual influence in the network without considering mutual effects among them. However, in many practical settings like vaccine distribution, the challenge is how to select a group of influential nodes. This task is more complex due to the interactions and collective influence of these nodes together. This paper introduces a new framework integrating Graph Neural Network (GNN) and Deep Reinforcement Learning (DRL) for vaccination distribution. This approach combines network structural learning with strategic decision-making. It aims to efficiently disrupt the network structure and stop disease spread by targeting and removing influential nodes. This method is particularly effective in complex environments, where traditional strategies might not be efficient or scalable. Its effectiveness is tested across various network types including both synthetic and real-world datasets, demonstrating potential for real-world applications in fields like epidemiology and cybersecurity. This interdisciplinary approach shows the capabilities of deep learning in understanding and manipulating complex network systems.
Article
Full-text available
Most transportation networks are determined not only by their topology but also by the traffic taking place on the links. It is therefore crucial to characterize the traffic and its possible correlations with the network’s topology. We first define and introduce some of the tools which allow for the characterization of the traffic and the structure of a weighted network. We illustrate these measures on the example of the airport network, in which the nodes are airports and links represent direct connections. The weight on each link is then given by the number of passengers. The main results are the following: (i) the traffic is very heterogeneous and is distributed according to a broad law; (ii) the number of passengers per connection is not constant and increases with the number of connections of an airport, which implies that traffic and topology are not independent. More generally, these measures show that the traffic cannot in general be ignored and that the modeling of transportation networks has to integrate both the topology and the weights simultaneously. We thus propose a model which explains some of the features observed in real-world networks. The main ingredient in this weighted network growth model is a dynamical coupling between weights and links: every time a new link enters the system, the traffic is perturbed. We show that this simple ingredient allows us to understand the structure of some real-world weighted networks as well as the interplay between traffic flows and the network’s architecture.
Article
Full-text available
This paper develops an analytical model of contagion in financial networks with arbitrary structure. We explore how the probability and potential impact of contagion is influenced by aggregate and idiosyncratic shocks, changes in network structure and asset market liquidity. Our findings suggest that financial systems exhibit a robust-yet-fragile tendency: while the probability of contagion may be low, the effects can be extremely widespread when problems occur. And we suggest why the resilience of the system in withstanding fairly large shocks prior to 2007 should not have been taken as a reliable guide to its future robustness.
Article
Full-text available
A number of centrality measures are available to determine the relative importance of a node in a complex network, and betweenness is prominent among them. However, the existing centrality measures are not adequate in network percolation scenarios (such as during infection transmission in a social network of individuals, spreading of computer viruses on computer networks, or transmission of disease over a network of towns) because they do not account for the changing percolation states of individual nodes. We propose a new measure, percolation centrality, that quantifies the relative impact of nodes based on their topological connectivity, as well as their percolation states. The measure can be extended to include random walk based definitions, and its computational complexity is shown to be of the same order as that of betweenness centrality.
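networkx ships an implementation of this measure; a minimal usage sketch, assuming each node carries a percolation state in [0, 1] (e.g., an infection level):

```python
# Minimal usage sketch of percolation centrality via networkx;
# the per-node states here are randomly generated for illustration.
import random
import networkx as nx

random.seed(3)
G = nx.erdos_renyi_graph(100, 0.05, seed=3)
states = {v: random.random() for v in G}   # assumed percolation states
pc = nx.percolation_centrality(G, states=states)
top5 = sorted(pc, key=pc.get, reverse=True)[:5]
print(top5)
```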
Article
Full-text available
Scientific Reports 6, Article number 27823 (doi:10.1038/srep27823); published online 14 June 2016; updated 25 August 2016. This Article contains errors in the Acknowledgements section.
Article
Full-text available
Identifying a set of influential spreaders in complex networks plays a crucial role in effective information spreading. A simple strategy is to choose the top-r ranked nodes as spreaders according to an influence ranking method such as PageRank, ClusterRank or k-shell decomposition. Besides, some heuristic methods such as hill-climbing, SPIN, degree discount and independent-set-based approaches have also been proposed. However, these approaches either suffer from the possibility that some spreaders are so close together that their spheres of influence overlap, or are time-consuming. In this report, we present a simple yet effective iterative method named VoteRank to identify a set of decentralized spreaders with the best spreading ability. In this approach, all nodes vote for a spreader in each turn, and the voting ability of the neighbors of an elected spreader is decreased in subsequent turns. Experimental results on four real networks show that under the Susceptible-Infected-Recovered (SIR) model, VoteRank outperforms the traditional benchmark methods on both spreading speed and final affected scale. What is more, VoteRank is also superior to other group-spreader identification methods in terms of computational time.
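A compact sketch of the voting procedure as described (networkx also provides nx.voterank); the suppression step assumes the ability decrement is 1/⟨k⟩, following the published description:

```python
# Sketch of VoteRank-style iterative voting, assuming an unweighted,
# undirected graph; not a substitute for the reference implementation.
import networkx as nx

def voterank_sketch(G, r):
    ability = {v: 1.0 for v in G}     # every node starts with voting ability 1
    k_avg = sum(d for _, d in G.degree()) / G.number_of_nodes()
    spreaders = []
    for _ in range(r):
        # each candidate's score is the sum of its neighbours' voting abilities
        score = {v: sum(ability[u] for u in G[v])
                 for v in G if v not in spreaders}
        best = max(score, key=score.get)
        spreaders.append(best)
        ability[best] = 0.0           # an elected spreader no longer votes
        for u in G[best]:             # weaken its neighbours' future votes
            ability[u] = max(0.0, ability[u] - 1.0 / k_avg)
    return spreaders

print(voterank_sketch(nx.karate_club_graph(), 5))
```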
Article
Full-text available
We elaborate on a linear time implementation of the Collective Influence (CI) algorithm introduced by Morone, Makse, Nature 524, 65 (2015) to find the minimal set of influencers in a network via optimal percolation. We show that the computational complexity of CI is O(N log N) when removing nodes one-by-one, with N the number of nodes. This is made possible by using an appropriate data structure to process the CI values, and by the finite radius l of the CI sphere. Furthermore, we introduce a simple extension of CI when l is infinite, the CI propagation (CI_P) algorithm, which considers the global optimization of influence via message passing in the whole network and identifies a slightly smaller fraction of influencers than CI. Remarkably, CI_P is able to reproduce the exact analytical optimal percolation threshold obtained by Bau, Wormald, Random Struct. Alg. 21, 397 (2002) for cubic random regular graphs, leaving little room for improvement on random graphs. We also introduce the Collective Immunization Belief Propagation algorithm (CI_BP), a belief-propagation (BP) variant of CI based on optimal immunization, which has the same performance as CI_P. However, this small performance gain of the order of 1-2% in the low-influencers tail comes at the expense of increasing the computational complexity from O(N log N) to O(N^2 log N), rendering both CI_P and CI_BP prohibitive for finding influencers in modern-day big data. The same nonlinear running time drawback pertains to the recently introduced BP-decimation (BPD) algorithm by Mugisha, Zhou, arXiv:1603.05781. For instance, we show that for big-data social networks of typically 200 million users (e.g., active Twitter users sending 500 million tweets per day), CI finds the influencers in less than 3 hours running on a single CPU, while the BP algorithms (CI_P, CI_BP and BPD) would take more than 3,000 years to accomplish the same task.
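The CI score itself has a simple closed form, CI_l(i) = (k_i − 1) · Σ_{j ∈ ∂Ball(i, l)} (k_j − 1), summed over the frontier of the ball of radius l around node i. A brute-force sketch is below; the O(N log N) implementation discussed above additionally relies on a heap and local updates, which this sketch omits.

```python
# Brute-force Collective Influence score (published formula); illustrative
# only, without the data structures that make the paper's version fast.
import networkx as nx

def collective_influence(G, i, l=2):
    dist = nx.single_source_shortest_path_length(G, i, cutoff=l)
    frontier = [j for j, d in dist.items() if d == l]   # boundary of Ball(i, l)
    return (G.degree(i) - 1) * sum(G.degree(j) - 1 for j in frontier)

G = nx.barabasi_albert_graph(500, 3, seed=1)
ci = {v: collective_influence(G, v, l=2) for v in G}
print(sorted(ci, key=ci.get, reverse=True)[:10])   # candidate influencers
```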
Article
Full-text available
Recently, the abundance of digital data is enabling the implementation of graph-based ranking algorithms that provide system level analysis for ranking publications and authors. Here, we take advantage of the entire Physical Review publication archive (1893-2006) to construct authors' networks where weighted edges, as measured from opportunely normalized citation counts, define a proxy for the mechanism of scientific credit transfer. On this network, we define a ranking method based on a diffusion algorithm that mimics the spreading of scientific credits on the network. We compare the results obtained with our algorithm with those obtained by local measures such as the citation count and provide a statistical analysis of the assignment of major career awards in the area of physics. A website where the algorithm is made available to perform customized rank analysis can be found at the address http://www.physauthorsrank.org.
Article
Full-text available
The study of network disintegration has attracted much attention due to its wide applications, including suppressing epidemic spreading, destabilizing terrorist networks, preventing financial contagion, controlling rumor diffusion and perturbing cancer networks. The crux of this matter is to find the critical nodes whose removal will lead to network collapse. This paper studies the disintegration of networks with incomplete link information. An effective method is proposed to find the critical nodes with the assistance of link prediction techniques. Extensive experiments on both synthetic and real networks suggest that, by using link prediction methods to recover partial missing links in advance, the method can largely improve network disintegration performance. Besides, to our surprise, we find that when the amount of missing information is relatively small, our method even outperforms the results based on complete information. We refer to this phenomenon as the “comic effect” of link prediction: the network is reshaped through the addition of some links identified by link prediction algorithms, and the reshaped network is like an exaggerated but characteristic comic of the original one, in which the important parts are emphasized.
Article
Full-text available
Complex networks with inhomogeneous topology are very fragile to intentional attacks on the "hub nodes". It is therefore important and desirable to evaluate node importance and find these "hub nodes". The network agglomeration is defined first. A node contraction method for evaluating node importance in complex networks is then proposed, based on a new evaluation criterion: the most important node is the one whose contraction results in the largest increase of the network agglomeration. With the node contraction method, both the degree and the position of a node are considered, and the disadvantage of the node deletion method is avoided. An algorithm whose time complexity is O(n^3) is proposed. Final experiments verify its efficiency.
Article
Full-text available
Identifying influential nodes in dynamical processes is crucial in understanding network structure and function. Degree, H-index and coreness are widely used metrics, but were previously treated as unrelated. Here we show their relation by constructing an operator, in terms of which degree, H-index and coreness are the initial, intermediate and steady states of the sequences, respectively. We obtain a family of H-indices that can be used to measure a node's importance. We also prove that the convergence to coreness can be guaranteed even under an asynchronous updating process, allowing a decentralized local method of calculating a node's coreness in large-scale evolving networks. Numerical analyses of the susceptible-infected-removed spreading dynamics on disparate real networks suggest that the H-index is a good tradeoff that in many cases can better quantify node influence than either degree or coreness.
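A small sketch of the operator family described above, assuming an unweighted graph: h_0 is the degree, each application of the operator replaces a node's value by the H-index of its neighbours' current values, and the fixed point is the coreness.

```python
# Sketch of the H operator: h_{n+1}(v) is the largest h such that v has
# at least h neighbours whose current value is >= h; iterating from the
# degree converges to the coreness (k-shell index).
import networkx as nx

def h_operator(G, values):
    new = {}
    for v in G:
        vals = sorted((values[u] for u in G[v]), reverse=True)
        h = 0
        while h < len(vals) and vals[h] >= h + 1:
            h += 1
        new[v] = h
    return new

G = nx.karate_club_graph()
h = dict(G.degree())                  # h_0: the degree
for _ in range(50):                   # iterate until the fixed point
    nxt = h_operator(G, h)
    if nxt == h:
        break
    h = nxt
assert h == nx.core_number(G)         # steady state equals the coreness
```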
Article
Full-text available
PLAD (plasma doping) is promising for both evolutionary and revolutionary doping options because of its unique advantages, which can overcome or minimize many of the issues of beam-line (BL) based implants. In this talk, I present developments of PLAD on both planar and non-planar 3D device structures. Compared with conventional BL implants, PLAD shows not only a significant production enhancement, but also a significant device performance improvement and 3D structure doping capability, including an 80% contact resistance reduction and more than 25% drive current increase on planar devices, and a 23% series resistance reduction and 25% drive current increase on non-planar 3D devices.
Article
Full-text available
In complex networks, it is of great theoretical and practical significance to identify a set of critical spreaders which help to control the spreading process. Some classic methods have been proposed to identify multiple spreaders. However, they sometimes have limitations for networks with community structure, because many chosen spreaders may be clustered in a single community. In this paper, we suggest a novel method to identify multiple spreaders from communities in a balanced way. The network is first divided into a great many super nodes, and then k spreaders are selected from these super nodes. Experimental results on real and synthetic networks with community structure show that our method outperforms the classic methods based on degree centrality, k-core and ClusterRank in most cases.
Article
Full-text available
Similarity is a fundamental measure in network analyses and machine learning algorithms, with wide applications ranging from personalized recommendation to socio-economic dynamics. We argue that an effective similarity measurement should guarantee stability even under some information loss. With six bipartite networks, we investigate the stabilities of fifteen similarity measurements by comparing the similarity matrices of two data samples randomly divided from the original data sets. Results show that the fifteen measurements can be well classified into three clusters according to their stabilities, and measurements in the same cluster have similar mathematical definitions. In addition, we develop a top-n-stability method for personalized recommendation, and find that unstable similarities would recommend false information to users, and that the performance of recommendation can be largely improved by using stable similarity measurements. This work provides a novel dimension for analyzing and evaluating similarity measurements, which can further find applications in link prediction, personalized recommendation, clustering algorithms, community detection and so on.
Article
Full-text available
Background: Computational approaches aided by computer science have been used to predict essential proteins and are faster than expensive, time-consuming, laborious experimental approaches. However, the performance of such approaches is still poor, making practical applications of computational approaches difficult in some fields. Hence, the development of more suitable and efficient computing methods is necessary for the identification of essential proteins. Method: In this paper, we propose a new method for predicting essential proteins in a protein interaction network, local interaction density combined with protein complexes (LIDC), based on statistical analyses of essential proteins and protein complexes. First, we introduce a new local topological centrality, local interaction density (LID), of the yeast PPI network; second, we discuss a new integration strategy for multiple bioinformatics. The LIDC method is then developed through a combination of LID and protein complex information based on our new integration strategy. The purpose of LIDC is the discovery of important features of essential proteins together with their neighbors in real protein complexes, thereby improving the efficiency of identification. Results: Experimental results based on three different PPI (protein-protein interaction) networks of Saccharomyces cerevisiae and Escherichia coli showed that LIDC outperformed classical topological centrality measures and some recent combinational methods. Moreover, when predicting MIPS datasets, LIDC outperformed all nine reference methods (i.e., DC, BC, NC, LID, PeC, CoEWC, WDC, ION, and UC). Conclusions: LIDC is more effective for the prediction of essential proteins than other recently developed methods.
Article
Full-text available
Social networks constitute a new platform for information propagation, but their success is crucially dependent on the choice of spreaders who initiate the spreading of information. In this paper, we remove edges in a network at random so that the network segments into isolated clusters. The most important nodes in each cluster then form a group of influential spreaders, such that news propagating from them leads to extensive coverage and minimal redundancy. The method exploits the similarity between the pre-percolated state and the coverage of information propagation in each social cluster to obtain a set of distributed and coordinated spreaders. Our tests on Facebook networks show that this method outperforms conventional methods based on centrality. The suggested way of identifying influential spreaders thus sheds light on a new paradigm of information propagation on social networks.
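A minimal sketch of the selection idea described above, under stated assumptions: a single bond-percolation realisation, with degree in the original graph as a stand-in for the per-cluster importance score (the described procedure would aggregate over many realisations).

```python
# One bond-percolation realisation: keep each edge with probability
# keep_prob, then pick one node per surviving cluster. Degree in the
# original graph G is an assumed, simple importance proxy.
import random
import networkx as nx

def percolation_spreaders(G, keep_prob=0.3, seed=0):
    rng = random.Random(seed)
    H = nx.Graph()
    H.add_nodes_from(G)
    H.add_edges_from(e for e in G.edges() if rng.random() < keep_prob)
    return [max(comp, key=G.degree) for comp in nx.connected_components(H)]

G = nx.barabasi_albert_graph(300, 2, seed=7)
print(percolation_spreaders(G)[:10])
```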
Article
Full-text available
Most centralities proposed for identifying influential spreaders on social networks, either to spread a message or to stop an epidemic, require the full topological information of the network on which spreading occurs. In practice, however, collecting all connections between agents in social networks can hardly be achieved. As a result, such metrics could be difficult to apply to real social networks. Consequently, a new approach for identifying influential people without explicit network information is demanded in order to provide an efficient immunization or spreading strategy in a practical sense. In this study, we seek a possible way of finding influential spreaders by using the social mechanisms of how social connections are formed in real networks. We find that a reliable immunization scheme can be achieved by asking people how they interact with each other. From these surveys we find that the probabilistic tendency to connect to a hub has the strongest predictive power for influential spreaders among the tested social mechanisms. Our observation also suggests that people who connect different communities are more likely to be influential spreaders when a network has a strong modular structure. Our finding implies that not only the effect of network location but also the behavior of individuals is important in designing optimal immunization or spreading schemes.
Article
Full-text available
The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network [1]; or, if immunized, would prevent the diffusion of a large scale epidemic [2,3]. Localizing this optimal, i.e. minimal, set of structural nodes, called influencers, is one of the most important problems in network science [4,5]. Despite the vast use of heuristic strategies to identify influential spreaders [6-14], the problem remains unsolved. Here, we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix [15] of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly-connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. Eventually, the present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase [16].
Article
Full-text available
Recent study shows that the accuracy of the k-shell method in determining node coreness in a spreading process is largely impacted by the existence of core-like groups, which have a large k-shell index but a low spreading efficiency. Based on an analysis of the structure of core-like groups in real-world networks, we discover that nodes in a core-like group are mutually densely connected with very few out-leaving links from the group. By defining a measure of diffusion importance for each edge based on the number of out-leaving links of both of its ends, we are able to identify redundant links in the spreading process, which have a relatively low diffusion importance but lead to the formation of the locally densely connected core-like group. After filtering out the redundant links and applying the k-shell method to the residual network, we obtain a renewed coreness for each node, which is a more accurate index of its location importance and spreading influence in the original network. Moreover, we find that the performance of ranking algorithms based on the renewed coreness is also greatly enhanced. Our findings help to more accurately decompose the network core structure and identify influential nodes in spreading processes.
Article
Full-text available
Identifying the most influential spreaders is an important issue in understanding and controlling spreading processes on complex networks. Recent studies showed that nodes located in the core of a network, as identified by the k-shell decomposition, are the most influential spreaders. However, through a great deal of numerical simulation, we observe that not in all real networks are nodes in high shells very influential: in some networks the core nodes are the most influential, which we call a true core, while in others nodes in high shells, even the innermost core, are not good spreaders, which we call a core-like group. By analyzing the k-core structure of the networks, we find that the true core of a network links diversely to the shells of the network, while the core-like group links very locally within the group. For nodes in the core-like group, the k-shell index cannot reflect their location importance in the network. We further introduce a measure based on the link diversity of shells to effectively distinguish the true core from core-like groups, and identify core-like groups throughout the networks. Our findings help to better understand the structural features of real networks and influential nodes.
Article
Nodes in real-world networks organize into densely linked communities where edges appear with high concentration among the members of the community. Identifying such communities of nodes has proven to be a challenging task due to a plethora of definitions of network communities, intractability of methods for detecting them, and the issues with evaluation which stem from the lack of a reliable gold-standard ground-truth. In this paper, we distinguish between structural and functional definitions of network communities. Structural definitions of communities are based on connectivity patterns, like the density of connections between the community members, while functional definitions are based on (often unobserved) common function or role of the community members in the network. We argue that the goal of network community detection is to extract functional communities based on the connectivity structure of the nodes in the network. We then identify networks with explicitly labeled functional communities to which we refer as ground-truth communities. In particular, we study a set of 230 large real-world social, collaboration, and information networks where nodes explicitly state their community memberships. For example, in social networks, nodes explicitly join various interest-based social groups. We use such social groups to define a reliable and robust notion of ground-truth communities. We then propose a methodology, which allows us to compare and quantitatively evaluate how different structural definitions of communities correspond to ground-truth functional communities. We study 13 commonly used structural definitions of communities and examine their sensitivity, robustness and performance in identifying the ground-truth. We show that the 13 structural definitions are heavily correlated and naturally group into four classes. We find that two of these definitions, Conductance and Triad participation ratio, consistently give the best performance in identifying ground-truth communities. We also investigate a task of detecting communities given a single seed node. We extend the local spectral clustering algorithm into a heuristic parameter-free community detection method that easily scales to networks with more than 100 million nodes. The proposed method achieves 30 % relative improvement over current local clustering methods.
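Of the two winning definitions above, conductance in particular has a one-line form: the number of cut edges leaving a set divided by the smaller of the two side volumes. A quick sketch (illustrative, not the paper's tooling):

```python
# Conductance of a node set S: cut(S) / min(vol(S), vol(V \ S)),
# where vol(.) is the sum of degrees; lower conductance = better community.
import networkx as nx

def conductance(G, S):
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = 2 * G.number_of_edges() - vol_S
    return cut / min(vol_S, vol_rest)

G = nx.karate_club_graph()
club = [v for v in G if G.nodes[v]["club"] == "Mr. Hi"]
print(conductance(G, club))
```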
Book
Complex networks such as the Internet, the WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds pose challenges for future technological development. This book is the first systematic presentation of dynamically evolving networks, with many up-to-date applications and homework projects to enhance study. The authors are all very active and well known in the rapidly evolving field of complex networks, which is becoming an increasingly important area of research. The material is presented in a logical, constructive style, from basic through to complex, covering algorithms for constructing networks and the research challenges of the future.
Article
Matrix and tensor completion aim to recover a low-rank matrix / tensor from limited observations and have been commonly used in applications such as recommender systems and multi-relational data mining. A state-of-the-art matrix completion algorithm is Soft-Impute, which exploits the special "sparse plus low-rank" structure of the matrix iterates to allow efficient SVD in each iteration. Though Soft-Impute is a proximal algorithm, it is generally believed that acceleration destroys the special structure and is thus not useful. In this paper, we show that Soft-Impute can indeed be accelerated without compromising this structure. To further reduce the iteration time complexity, we propose an approximate singular value thresholding scheme based on the power method. Theoretical analysis shows that the proposed algorithm still enjoys the fast O(1/T^2) convergence rate of accelerated proximal algorithms. We further extend the proposed algorithm to tensor completion with the scaled latent nuclear norm regularizer. We show that a similar "sparse plus low-rank" structure also exists, leading to low iteration complexity and the same fast O(1/T^2) convergence rate. Extensive experiments demonstrate that the proposed algorithm is much faster than Soft-Impute and other state-of-the-art matrix and tensor completion algorithms.
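For context, here is a bare-bones sketch of the basic Soft-Impute iteration that the paper accelerates, using a dense SVD for clarity; the actual algorithm exploits the sparse-plus-low-rank structure of the iterates instead.

```python
# Basic Soft-Impute loop: fill the missing entries with the current
# estimate, then soft-threshold the singular values.
import numpy as np

def soft_impute(M_obs, mask, lam, iters=100):
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        filled = np.where(mask, M_obs, X)      # observed entries + current guess
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)           # singular value soft-thresholding
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # rank-5 truth
mask = rng.random(M.shape) < 0.5               # observe half the entries
X_hat = soft_impute(np.where(mask, M, 0.0), mask, lam=1.0)
```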
Article
In this paper, we propose a network performance/efficiency measure for the evaluation of financial networks with intermediation. The measure captures risk, transaction cost, price, transaction flow, revenue, and demand information in the context of the decision-makers' behavior in multitiered financial networks that also allow for electronic transactions. The measure is then utilized to define the importance of a financial network component, that is, a node or a link, or a combination of nodes and links. Numerical examples are provided in which the efficiency of the financial network is computed along with the importance ranking of the nodes and links. The results in this paper can be used to assess which nodes and links in financial networks are the most vulnerable in the sense that their removal will impact the efficiency of the network in the most significant way. Hence, the results in this paper have relevance to national security as well as implications for the insurance industry.
Article
The book that launched the Dempster–Shafer theory of belief functions appeared 40 years ago. This intellectual autobiography looks back on how I came to write the book and how its ideas played out in my later work.
Article
The intuitive background for measures of structural centrality in social networks is reviewed and existing measures are evaluated in terms of their consistency with intuitions and their interpretability.
Article
We implement a novel method to detect systemically important financial institutions in a network. The method consists in a simple model of distress and losses redistribution derived from the interaction of banks' balance-sheets through bilateral exposures. The algorithm goes beyond the traditional default-cascade mechanism, according to which contagion propagates only through banks that actually default. We argue that even in the absence of other defaults, distressed-but-non-defaulting institutions transmit the contagion through channels other than solvency: weakness in their balance sheet reduces the value of their liabilities, thereby negatively affecting their interbank lenders even before a credit event occurs. In this paper, we apply the methodology to a unique dataset covering bilateral exposures among all Italian banks in the period 2008-2012. We find that the systemic impact of individual banks has decreased over time since 2008. The result can be traced back to decreasing volumes in the interbank market and to an intense recapitalization process. We show that the marginal effect of a bank's capital on its contribution to systemic risk in the network is considerably larger when interconnectedness is high (good times): this finding supports the regulatory work on counter-cyclical (macroprudential) capital buffers.
Article
The structural characteristics of weighted complex networks are analysed, and the effect of edge weights on the estimation of node importance is calculated. A new definition of weighted node importance is proposed, and an improved node contraction method for weighted networks is given based on the evaluation criterion that the most important node is the one whose contraction results in the largest increase of the weighted network agglomeration. The time complexity of this algorithm is O(n^3), and the improved evaluation method can help to find critical nodes in complex networks exactly. Final experiments verify the efficiency and feasibility of the proposed method.
Article
In order to quantitatively calculate the invulnerability of a communication network, taking the fully connected network as a reference, an evaluation method based on disjoint paths in the topology is proposed to define an invulnerability index and the vitality of nodes and links, together with a method for calculating the disjoint paths. The invulnerability index is obtained by calculating the ratio of the disjoint paths of the nodes in the target network to those in the fully connected network. Furthermore, according to the value of the invulnerability index under node or link failure, the importance of nodes and links is evaluated. The correctness and the time and space complexity of the proposed method are discussed. An example and a comparison with the evaluation method based on shortest paths indicate that the proposed method is more reasonable and better reflects actual communication network performance.
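A rough sketch of such a ratio, under the assumption that the pairwise count of disjoint paths is taken as the local node connectivity, which equals n − 1 for every pair in the fully connected reference K_n; the exact path-counting rules of the proposed method may differ.

```python
# Illustrative sketch (assumptions noted above): invulnerability as the
# ratio of pairwise disjoint-path counts to the fully connected reference.
import itertools
import networkx as nx

def invulnerability_index(G):
    n = G.number_of_nodes()
    total = sum(nx.node_connectivity(G, s, t)
                for s, t in itertools.combinations(G, 2))
    full = n * (n - 1) / 2 * (n - 1)   # K_n: n - 1 disjoint paths per pair
    return total / full

G = nx.petersen_graph()
print(invulnerability_index(G))       # 1.0 would mean "as robust as K_n"
```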
Article
Online social networks have developed remarkably over the last decade, with considerable social and economic impact. Currently the most famous online social network, Facebook, counts more than one billion monthly active users across the globe. Online social networks therefore attract a great deal of attention among practitioners as well as research communities. Given the huge value of the information they hold, numerous online social networks have been valued at billions of dollars. Hence, a combination of this technical and social phenomenon has evolved worldwide with increasing socioeconomic impact. Online social networks can play an important role in viral marketing techniques, due to their power to improve web search, provide recommendations in various filtering systems, and spread a technology (product) through the market very quickly. In online social networks, it is interesting and important to identify a node which can affect the behaviour of its neighbours; we call such a node an influential node. The main objective of this paper is to provide an overview of various techniques for influential user identification. The paper also covers techniques based on the structural properties of online social networks as well as techniques based on the content published by the users of a social network.
Article
Large-scale websites are predominantly built as a service-oriented architecture. Here, services are specialized for a certain task, run on multiple machines, and communicate with each other to serve a user's request. An anomalous change in a metric of one service can propagate to other services during this communication, resulting in overall degradation of the request. As any such degradation is revenue impacting, maintaining correct functionality is of paramount concern: it is important to find the root cause of any anomaly as quickly as possible. This is challenging because there are numerous metrics or sensors for a given service, and a modern website is usually composed of hundreds of services running on thousands of machines in multiple data centers. This paper introduces MonitorRank, an algorithm that can reduce the time, domain knowledge, and human effort required to find the root causes of anomalies in such service-oriented architectures. In the event of an anomaly, MonitorRank provides a ranked order list of possible root causes for monitoring teams to investigate. MonitorRank uses the historical and current time-series metrics of each sensor as its input, along with the call graph generated between sensors to build an unsupervised model for ranking. Experiments on real production outage data from LinkedIn, one of the largest online social networks, shows a 26% to 51% improvement in mean average precision in finding root causes compared to baseline and current state-of-the-art methods.
Article
Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
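The two ingredients above are exactly the preferential attachment growth model; a minimal sketch of the generative process (networkx also ships nx.barabasi_albert_graph):

```python
# Minimal growth sketch: continuous expansion plus degree-proportional
# (preferential) attachment, mirroring the two generic mechanisms.
import random
import networkx as nx

def ba_graph(n, m, seed=0):
    rng = random.Random(seed)
    G = nx.empty_graph(m)
    targets = list(range(m))       # the first new vertex links to all seeds
    repeated = []                  # each vertex appears once per unit degree
    for new in range(m, n):
        G.add_edges_from((new, t) for t in targets)
        repeated.extend(targets)
        repeated.extend([new] * m)
        chosen = set()
        while len(chosen) < m:     # sample targets proportionally to degree
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return G

G = ba_graph(1000, 3)
print(max(d for _, d in G.degree()))   # heavy-tailed: a few large hubs
```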
Article
It is a fundamental and important issue to identify influential nodes in complex networks. The existing evidential semi-local centrality modifies the evidential centrality according to the actual degree distribution, but the topological connections among the neighbors of a node in a weighted network are not taken into account. In this paper, a novel measure called evidential local structure centrality is proposed to identify influential nodes. Firstly, the value of the modified evidential centrality is calculated from the actual degree distribution. Secondly, local structure centrality combined with the modified evidential centrality is extended to weighted networks. Then, in order to evaluate the performance of the proposed method, we use the susceptible-infected-recovered (SIR) and susceptible-infected (SI) models to simulate the spreading process on real networks. Experimental results show that our method is effective and efficient in identifying influential nodes.
Chapter
Introduction; Voltage Delivered from a Source to a Load; Power Delivered from a Source to a Load; Impedance; Conjugate Matching; Additional Effect of Impedance Matching; Appendices; Reference; Further Reading; Exercises; Answers