## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
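The preferential-attachment mechanism described in this abstract can be sketched in a few lines. The following is a minimal, stdlib-only illustration; the repeated-nodes list used for degree-proportional sampling is a standard implementation device, not part of the original paper:

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a graph by preferential attachment: each new vertex
    attaches m edges to existing vertices chosen proportionally
    to their current degree (a minimal sketch of the BA model)."""
    rng = random.Random(seed)
    edges = set()
    # vertex v appears in `repeated` once per unit of degree, so
    # uniform sampling from it is degree-proportional sampling
    repeated = []
    # start from a small clique of m vertices
    for u in range(m):
        for v in range(u + 1, m):
            edges.add((u, v))
            repeated += [u, v]
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(repeated))
        for t in targets:
            edges.add((t, new))
            repeated += [t, new]
    return edges

g = barabasi_albert(2000, 3)
degree = {}
for u, v in g:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
# heavy tail: the maximum degree far exceeds the mean (about 2m)
print(max(degree.values()), sum(degree.values()) / len(degree))
```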


... In addition to GNN-based methods, the concept of subgraph link prediction can be extended to low-order heuristic link predictors, like Common Neighbor [1], Adamic-Adar index [20], Preferential Attachment [37], Jaccard Index [38], and Resource Allocation [39]. Predictors of order r can be computed from the subgraph G^r_{i,j}. ...

... Early studies on link prediction problems mainly focus on heuristic methods, which require expertise on the underlying traits of the network or hand-crafted features, including Common Neighbor [1], Adamic-Adar index [20] and Preferential Attachment [37], etc. WLNM [56] suggests a method to encode the induced subgraph of the target link as an adjacency matrix to represent the link. With the huge success of GNNs [9], GNN-based link prediction methods have become dominant across different areas. ...

... However, other link predictors, including Preferential Attachment (PA) [37] and Jaccard Index (Jac) [38], are not Edge Invariant. The existence or absence of the target link can change the values of these predictors. ...
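The low-order heuristics named in these excerpts are simple functions of the two endpoints' neighborhoods. A minimal sketch, assuming an undirected graph stored as a dict of adjacency sets:

```python
import math

def neighbors(adj, u):
    return adj.get(u, set())

def common_neighbors(adj, u, v):
    return len(neighbors(adj, u) & neighbors(adj, v))

def adamic_adar(adj, u, v):
    # shared neighbors weighted by 1/log(degree)
    return sum(1.0 / math.log(len(neighbors(adj, w)))
               for w in neighbors(adj, u) & neighbors(adj, v)
               if len(neighbors(adj, w)) > 1)

def preferential_attachment(adj, u, v):
    return len(neighbors(adj, u)) * len(neighbors(adj, v))

def jaccard(adj, u, v):
    union = neighbors(adj, u) | neighbors(adj, v)
    return common_neighbors(adj, u, v) / len(union) if union else 0.0

def resource_allocation(adj, u, v):
    # shared neighbors weighted by 1/degree
    return sum(1.0 / len(neighbors(adj, w))
               for w in neighbors(adj, u) & neighbors(adj, v))

# toy graph: triangle 1-2-3 with a pendant node 4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(common_neighbors(adj, 1, 2),       # node 3 is shared -> 1
      preferential_attachment(adj, 1, 2))  # 2 * 2 = 4
```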

Link prediction is a crucial problem in graph-structured data. Due to the recent success of graph neural networks (GNNs), a variety of GNN-based models were proposed to tackle the link prediction task. Specifically, GNNs leverage the message passing paradigm to obtain node representations, which relies on link connectivity. However, in a link prediction task, links in the training set are always present while those in the testing set are not yet formed, resulting in a discrepancy of the connectivity pattern and a bias in the learned representation. This leads to a dataset shift problem that degrades model performance. In this paper, we first identify the dataset shift problem in the link prediction task and provide theoretical analyses of how existing link prediction methods are vulnerable to it. We then propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets. Extensive experiments demonstrate the applicability and superiority of FakeEdge on multiple datasets across various domains.
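The edge-invariance issue quoted above, which underlies the train/test discrepancy FakeEdge targets, can be seen on a toy graph: Common Neighbors gives the same score whether or not the target link is present, while Preferential Attachment does not. A minimal illustration, not the FakeEdge implementation itself:

```python
def degree(adj, u):
    return len(adj.get(u, set()))

def pa(adj, u, v):   # Preferential Attachment score
    return degree(adj, u) * degree(adj, v)

def cn(adj, u, v):   # Common Neighbors score
    return len(adj.get(u, set()) & adj.get(v, set()))

def add_edge(adj, u, v):
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# graph without the target link (u, v) = (1, 4)
adj = {}
for a, b in [(1, 2), (2, 4), (1, 3), (3, 4)]:
    add_edge(adj, a, b)

before = (cn(adj, 1, 4), pa(adj, 1, 4))
add_edge(adj, 1, 4)   # target link present (training-time view)
after = (cn(adj, 1, 4), pa(adj, 1, 4))

print(before, after)  # CN is unchanged; PA jumps because both endpoint degrees grew
```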

... The synthetic dataset is composed of four different models: small-world [31], random [32], geographical [33] and scale-free [34]; the remainder of the synthetic datasets are based on the models from the latter. As for the real-world applications, they are composed of one social network and six metabolic network datasets. ...

... • 4-models synthetic database: This database has synthetic networks generated according to 4 distinct network classes: random [32], small-world [31], scale-free [34] and geographical [33]. • Noisy synthetic database: Using the models from the 4-models synthetic database and the scale-free synthetic database, this database was built by modifying the topology of the networks through the addition or removal of edges. ...

Network modeling has proven to be an efficient tool for many interdisciplinary areas, including social, biological, transport, and many other real-world complex systems. In addition, cellular automata (CA) are a formalism that has been studied in recent decades as a model for exploring patterns in the dynamic spatio-temporal behavior of these systems based on local rules. Some studies explore the use of cellular automata to analyze the dynamic behavior of networks, denominating them network automata (NA). Recently, NA proved to be efficient for network classification, since they use a time-evolution pattern (TEP) for feature extraction. However, the TEPs explored by previous studies are composed of binary values, which do not represent detailed information about the analyzed network. Therefore, in this paper, we propose alternative sources of information to use as descriptors for the classification task, which we denominate the density time-evolution pattern (D-TEP) and the state density time-evolution pattern (SD-TEP). We explore the density of alive neighbors of each node, which is a continuous value, and compute feature vectors based on histograms of the TEPs. Our results show a significant improvement compared to previous studies on five synthetic network databases and seven real-world databases. Our proposed method is not only a good approach for pattern recognition in networks, but also shows great potential for other kinds of data, such as images.
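A network automaton of this kind can be sketched as follows; the local rule below is a hypothetical placeholder, intended only to show how a continuous density time-evolution pattern is extracted from node states:

```python
import random

def step(adj, state):
    """One synchronous update: a node becomes alive when at least
    half of its neighbors are alive (a hypothetical local rule)."""
    new = {}
    for u, nbrs in adj.items():
        alive = sum(state[v] for v in nbrs)
        new[u] = 1 if nbrs and alive >= len(nbrs) / 2 else 0
    return new

def density_tep(adj, steps=10, seed=0):
    """D-TEP-style descriptor: per-step density of alive neighbors,
    averaged over nodes, giving one continuous value per time step."""
    rng = random.Random(seed)
    state = {u: rng.randint(0, 1) for u in adj}
    tep = []
    for _ in range(steps):
        dens = [sum(state[v] for v in nbrs) / len(nbrs)
                for u, nbrs in adj.items() if nbrs]
        tep.append(sum(dens) / len(dens))
        state = step(adj, state)
    return tep

# a small ring network of 8 nodes
adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(density_tep(adj, steps=5))
```

In the paper's pipeline the feature vector would then be built from histograms of such TEP values; here only the density sequence itself is shown.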

... The intuition can be gleaned from scale-free networks, where the proportion of nodes with degree d is proportional to d^(-γ). Under the most common scale-free model, Barabási-Albert [3], where γ = 3 and the average degree is 2m, splitting using the arithmetic mean yields unbalanced partitions with a 1/(8m^3) fraction of nodes. Even for modest values of m this quickly becomes unbalanced (i.e. when m = 3 the partition will be split 1:216). ...
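The quoted estimate, a 1/(8m³) fraction of nodes on one side of an arithmetic-mean split (with m the BA attachment parameter, as reconstructed from the 1:216 example), can be checked numerically:

```python
# fraction of nodes on the small side of an arithmetic-mean split,
# per the 1/(8 m^3) estimate quoted above
for m in (1, 2, 3, 5):
    print(m, 1 / (8 * m**3))
# m = 3 gives 1/216, matching the quoted 1:216 split
```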

... and is available for download. ...

We propose quasi-stable coloring, an approximate version of stable coloring. Stable coloring, also called color refinement, is a well-studied technique in graph theory for classifying vertices, which can be used to build compact, lossless representations of graphs. However, its usefulness is limited due to its reliance on strict symmetries. Real data compresses very poorly using color refinement. We propose the first, to our knowledge, approximate color refinement scheme, which we call quasi-stable coloring. By using approximation, we alleviate the need for strict symmetry, and allow for a tradeoff between the degree of compression and the accuracy of the representation. We study three applications: Linear Programming, Max-Flow, and Betweenness Centrality, and provide theoretical evidence in each case that a quasi-stable coloring can lead to good approximations on the reduced graph. Next, we consider how to compute a maximal quasi-stable coloring: we prove that, in general, this problem is NP-hard, and propose a simple, yet effective algorithm based on heuristics. Finally, we evaluate experimentally the quasi-stable coloring technique on several real graphs and applications, comparing with prior approximation techniques. A reference implementation and the experiment code are available at https://github.com/mkyl/QuasiStableColors.jl
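Color refinement itself, the exact procedure that quasi-stable coloring relaxes, fits in a short function. A minimal sketch:

```python
from collections import Counter

def color_refinement(adj):
    """Iteratively refine vertex colors: two vertices keep the same
    color only if their neighborhoods carry the same color multiset
    (a minimal sketch of stable coloring / 1-WL refinement)."""
    color = {u: 0 for u in adj}
    while True:
        # signature = own color plus sorted multiset of neighbor colors
        signature = {u: (color[u],
                         tuple(sorted(Counter(color[v] for v in adj[u]).items())))
                     for u in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {u: palette[signature[u]] for u in adj}
        if len(set(new.values())) == len(set(color.values())):
            return new  # no further refinement: coloring is stable
        color = new

# a 6-cycle is perfectly symmetric: one stable color class
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(len(set(color_refinement(cycle).values())))  # -> 1

# attach a pendant vertex and the symmetry breaks into several classes
cycle[6] = [0]
cycle[0] = [5, 1, 6]
print(len(set(color_refinement(cycle).values())))
```

Quasi-stable coloring, as the abstract explains, would tolerate approximately-equal neighborhood signatures instead of demanding exact equality; that relaxation is not shown here.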

... Fairness, modelled using the Ultimatum Game, has seen relatively little attention in the literature, and previous works have so far only considered an ideal world in which interactions are perfectly homogeneous (Cimpeanu et al., 2021). Nevertheless, real-world networks of individuals, such as social networks and networks of collaboration, are inherently heterogeneous (Barabási and Albert, 1999). Moreover, in the context of Evolutionary Game Theory (EGT), scale-free networks imply more than the underlying interaction structure. ...

... The Barabási and Albert (BA) model (Barabási and Albert, 1999) is one of the most famous models used in the study of highly heterogeneous, complex networks. The main features of the BA model are that it follows a preferential attachment rule, has a small clustering coefficient, and a typical power-law degree distribution. ...

Institutions and investors are constantly faced with the challenge of appropriately distributing endowments. No budget is limitless and optimising overall spending without sacrificing positive outcomes has been approached and resolved using several heuristics. To date, prior works have failed to consider how to encourage fairness in a population where social diversity is ubiquitous, and in which investors can only partially observe the population. Herein, by incorporating social diversity in the Ultimatum game through heterogeneous graphs, we investigate the effects of several interference mechanisms which assume incomplete information and flexible standards of fairness. We quantify the role of diversity and show how it reduces the need for information gathering, allowing us to relax a strict, costly interference process. Furthermore, we find that the influence of certain individuals, expressed by different network centrality measures, can be exploited to further reduce spending if minimal fairness requirements are lowered. Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness. Overall, our analysis provides novel insights to guide institutional policies in socially diverse complex systems.

... We elaborate on the datasets used for evaluation, the baseline approaches for comparison, the performance metrics used, and the experimental setup and parameters.

| Dataset | Nodes | Edges | |
| --- | --- | --- | --- |
| (Leskovec, Huttenlocher and Kleinberg, 2010a,b) | 889 | 2,914 | 40 |
| Twitch (Rozemberczki, Allen and Sarkar, 2019) | 7,126 | 35,324 | 20 |
| BA (Barabási and Albert, 1999) | 2,000 | 9,974 | 14 |
| Soc Hamsterster (Hamsterster) | 2,400 | 16,600 | 169 |
| PGP (Boguñá, Pastor-Satorras, Diaz-Guilera and Arenas, 2004) | 10,638 | 24,301 | 104 |
| PCG (Holme and Kim, 2002) | 2,000 | 9,963 | 21 |
| p2p-Gnutella04 (Ripeanu, Iamnitchi and Foster, 2002) | 10,876 | 39,994 | 24 |
| Email-univ (Leskovec, Kleinberg and Faloutsos, 2007) | 1,100 | 5,500 | 11 |

...

... • BA (Barabási and Albert, 1999): BA is a random graph generated using the Barabási-Albert preferential attachment model. A network of n nodes is generated by adding new nodes, each with m edges that attach preferentially to existing high-degree nodes. ...

In recent years, social networking platforms have gained significant popularity as means of connecting with people and propagating one's thoughts and opinions. This has opened the door to user-specific advertisements and recommendations on these platforms, bringing a significant focus on Influence Maximisation (IM) on social networks due to its wide applicability in target advertising, viral marketing, and personalized recommendations. The aim of IM is to identify certain nodes in the network which can help maximize the spread of certain information through a diffusion cascade. While several works have been proposed for IM, most fail to exploit community structures to their full extent. In this work, we propose a community structure-based approach, which employs a K-Shell algorithm to generate a score for the connections between seed nodes and communities in low-budget scenarios. Further, our approach employs entropy within communities to ensure the proper spread of information within them. We choose the Independent Cascade (IC) model to simulate information spread and evaluate it on four evaluation metrics. We validate our proposed approach on eight publicly available networks and find that it significantly outperforms the baseline approaches on these metrics, while still being relatively efficient.
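The K-Shell algorithm mentioned in the abstract is the classic k-core peeling procedure; a minimal stdlib sketch (the proposed method's scoring and entropy steps are not reproduced here):

```python
def k_shell(adj):
    """K-shell (k-core) decomposition: repeatedly peel vertices of
    minimum remaining degree; the shell index is a coarse measure
    of how central a node is (a minimal sketch)."""
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    alive = {u: set(nbrs) for u, nbrs in adj.items()}
    shell = {}
    k = 0
    while deg:
        k = max(k, min(deg.values()))
        peel = [u for u, d in deg.items() if d <= k]
        while peel:
            u = peel.pop()
            if u not in deg:
                continue  # already peeled
            shell[u] = k
            for v in alive[u]:
                if v in deg:
                    alive[v].discard(u)
                    deg[v] -= 1
                    if deg[v] <= k:
                        peel.append(v)
            del deg[u]
    return shell

# triangle {0, 1, 2} with a pendant node 3: the triangle is the 2-shell
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(k_shell(adj))
```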

... The resulting SCs are characterized by two distributions, the classic degree distribution P(k), capturing the fraction of nodes with degree k, and the generalized degree distribution P(k_l), where k_l characterizes the number of triangles supported by each link l = (i, j). We show that our generative model always yields a power-law scaling in P(k), recovering the ubiquitously observed scale-free property 33,34, and, at the same time, it allows full control over P(k_l), i.e. bounded or scale-free with any desired scaling exponent. Indeed, P(k_l) has been shown to play a crucial role in the emergence of collective behavior, such as synchronization 35. ...

... In the context of pair-wise interactions, the most natural measure of centrality is a node's individual degree, capturing its potential dynamic impact on the system 43. The discovery that most real world networks exhibit extreme levels of degree heterogeneity was disruptive, indicating that networks are highly centralized, with a potentially disproportionate role played by a small fraction of their components 33,34. ...

The past two decades have seen significant successes in our understanding of networked systems, from the mapping of real-world networks to the establishment of generative models recovering their observed macroscopic patterns. These advances, however, are restricted to pairwise interactions and provide limited insight into higher-order structures. Such multi-component interactions can only be grasped through simplicial complexes, which have recently found applications in social, technological and biological contexts. Here we introduce, study, and characterize a model to grow simplicial complexes of order two, i.e. nodes, links and triangles. Specifically, through a combination of preferential and/or non-preferential attachment mechanisms, the model constructs networks with a scale-free degree distribution and an either bounded or scale-free generalized degree distribution. By allowing analytical control over the scaling exponents, we arrive at a highly general scheme for constructing ensembles of synthetic complexes displaying desired statistical properties.

... We shall remark that our approach is valid independently of the specific degree distribution properties of the pristine graph. However, the maximum degree of a node in a scale-free network generated by the preferential attachment method [14] is known to scale as √N [15,16], and this implies that, for these networks, condition (3) is (from a given size on) always verified for any value of fixed cost c and any value of α, thus making them very good candidates for initializing the formation of ultra-small world structures. ...

... Therefore, to illustrate the power and generality of the above Theorem, we performed a massive numerical trial by initializing our game on networks of N nodes generated with the Barabási-Albert (BA) algorithm [14], for α = 1 (i.e., adopting as benefit the weighted betweenness centrality), H = 3 and c = 0.15√N (to ensure a coherent scaling of the cost with that of the maximum degree in the network). With these stipulations, condition (3) becomes 0.3k ≥ 0.15√N; since the maximum degree scales as √N [15,16], this means that condition (3) is verified at each value of N, and one then expects that the diameter at equilibrium would not exceed 6. ...

A wealth of evidence shows that real world networks are endowed with the small-world property, i.e., that the maximal distance between any two of their nodes scales logarithmically rather than linearly with their size. In addition, most social networks are organized so that no individual is more than six connections apart from any other, an empirical regularity known as the six degrees of separation. Why social networks have this ultra-small world organization, whereby the graph's diameter is independent of the network size over several orders of magnitude, is still unknown. Here we show that the 'six degrees of separation' is the property featured by the equilibrium state of any network where individuals weigh their aspiration to improve their centrality against the costs incurred in forming and maintaining connections. Thus, our results show how simple evolutionary rules of the kind traditionally associated with human cooperation and altruism can also account for the emergence of one of the most intriguing attributes of social networks.

... Overall, we found that inventor networks are characterized by rapidly changing compositions of inventors and links, contradicting the transaction cost theory (Ejermo and Karlsson 2006) as well as the assumptions of Barabási and Albert (1999). The networks of our sample showed a tendency to grow continuously since the number of discontinued inventors was more than compensated for by new inventors that mainly entered with a cooperative relationship. ...

The development of inventor networks is characterized by the addition of a significant number of new inventors, while a considerable number of incumbent inventors discontinue. We estimated the persistence of knowledge in the inventor networks of nine German regions using alternative assumptions about knowledge transfer. Based on these estimates, we analyzed how the size and structure of a network may influence knowledge persistence over time. In a final step, we assessed how persistent knowledge as well as the knowledge of new inventors affect the performance of regional innovation systems (RIS). The results suggest that the knowledge of new inventors is much more important for RIS performance than old knowledge that persists.

... In particular, concepts of complex networks are useful to unravel the connections and dynamics in large, complex, and dynamically-evolving systems, such as the hydrometric networks. For instance, the various (complex) networks-based measures that have been proposed to identify the type and properties of networks (e.g., Sabidussi 1966;Freeman 1979;Watts and Strogatz 1998;Barabási and Albert 1999;Girvan and Newman 2002;Gao et al. 2014) are particularly useful in the identification of the important and redundant elements (e.g., nodes) in networks, such as rainfall and streamflow stations in hydrometric networks. ...

Optimal design of hydrometric networks has been a long-standing problem in hydrology. Evaluation of the importance (or influence) of the individual monitoring stations is key to achieve an optimal design of a hydrometric network. The present study employs the concepts of complex networks towards assessing the importance of individual stations in a hydrometric network. For implementation, a streamflow network of 218 stations in Australia is studied, and monthly streamflow data of 26 years (1981–2006) are analyzed. Each station is considered as a node in the network and the connections between any pair of nodes are identified based on mutual information in the streamflow values. Six different node ranking measures are used to examine the importance of nodes in the network: degree centrality, betweenness centrality, closeness centrality, degree and influence of line, weighted degree betweenness, and clustering coefficient. Different threshold values of mutual information are also considered to examine the influence of threshold on the best node ranking measure. The six node ranking measures are evaluated using the decline rate of network efficiency. The results indicate that different node ranking measures identify different stations as the most important and least important in the network. Betweenness centrality and weighted degree betweenness generally perform the best in identifying the most important stations across the thresholds. The weighted degree betweenness measure outperforms the others in the identification of the least important stations, especially at higher thresholds. The clustering coefficient performs the worst in identifying the importance of stations in the streamflow monitoring network.
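The construction described, linking stations whose pairwise mutual information exceeds a threshold and then ranking nodes by centrality, can be sketched as follows. The scores below are hypothetical placeholders, and the mutual-information estimation itself is omitted:

```python
from itertools import combinations

def graph_from_scores(scores, names, threshold):
    """Link two stations when their pairwise score (e.g. mutual
    information between streamflow series) exceeds a threshold."""
    adj = {n: set() for n in names}
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        if scores[i][j] > threshold:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def degree_centrality(adj):
    n = len(adj)
    return {u: len(nbrs) / (n - 1) for u, nbrs in adj.items()}

# hypothetical pairwise scores for four stations
names = ["S1", "S2", "S3", "S4"]
scores = [[0.0, 0.9, 0.55, 0.7],
          [0.9, 0.0, 0.6, 0.1],
          [0.55, 0.6, 0.0, 0.3],
          [0.7, 0.1, 0.3, 0.0]]
adj = graph_from_scores(scores, names, threshold=0.5)
rank = sorted(degree_centrality(adj).items(), key=lambda kv: -kv[1])
print(rank)  # S1 has the most links above the threshold
```

The study ranks stations with six different measures (degree, betweenness, closeness, etc.); degree centrality is shown here only as the simplest representative.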

... • BA (Barabási and Albert, 1999): BA is a random graph generated using the Barabási-Albert preferential attachment model. A network of n nodes is generated by adding new nodes, each with m edges that attach preferentially to existing high-degree nodes. ...

Over the last couple of decades, Social Networks have connected people on the web from across the globe and have become a crucial part of our daily life. These networks have also rapidly grown as platforms for propagating products, ideas, and opinions to target a wider audience. This calls for the need to find influential nodes in a network for a variety of reasons, including the curb of misinformation being spread across the networks, advertising products efficiently, finding prominent protein structures in biological networks, etc. In this paper, we propose Modified Community Diversity (MCD), a novel method for finding influential nodes in a network by exploiting community detection and a modified community diversity approach. We extend the concept of community diversity to a two-hop scenario. This helps us evaluate a node's possible influence over a network more accurately and also avoids the selection of seed nodes with an overlapping scope of influence. Experimental results verify that MCD outperforms various other state-of-the-art approaches on eight datasets cumulatively across three performance metrics.
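The Independent Cascade model used for evaluation can be simulated directly; a minimal Monte-Carlo sketch (the toy graph, seed set, and probability p are illustrative):

```python
import random

def independent_cascade(adj, seeds, p=0.1, seed=0):
    """One IC simulation: each newly activated node gets a single
    chance to activate each inactive neighbor with probability p."""
    rng = random.Random(seed)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(adj, seeds, p=0.1, runs=200):
    """Monte-Carlo estimate of the expected cascade size."""
    total = 0
    for r in range(runs):
        total += len(independent_cascade(adj, seeds, p, seed=r))
    return total / runs

# a small two-community toy graph bridged by the edge 2-3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(expected_spread(adj, seeds=[2], p=0.3))
```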


... Another key property for efficient geometrical navigation is the existence of super-hubs that interconnect large parts of the network. This happens when the network degree distribution follows a power-law with exponent γ < 3 [1,4], in which case scale-free [6] networks are termed ultrasmall-world [1,7]. ...

We introduce in network geometry a measure of geometrical congruence (GC) to evaluate the extent to which a network topology follows an underlying geometry. This requires finding all topological shortest paths for each nonadjacent node pair in the network: a nontrivial computational task. Hence, we propose an optimized algorithm that reduces 26 years of worst-scenario computation to one week of parallel computing. Analysing artificial networks with patent geometry, we discover that, contrary to current belief, hyperbolic networks do not in general show high GC and efficient greedy navigability (GN) with respect to the geodesics. The myopic transfer which rules GN works best only when the degree-distribution power-law exponent is strictly close to two. Analysing real networks, whose geometry is often latent, GC overcomes GN as a marker to differentiate phenotypical states in macroscale structural-MRI brain connectomes, suggesting connectomes might have a latent neurobiological geometry accounting for more information than the visible tridimensional Euclidean one.
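Greedy navigability (GN) as described is myopic forwarding: at each hop, pass the message to the neighbor geometrically closest to the destination, failing when no neighbor improves. A minimal sketch on Euclidean coordinates (the paper's geometries are hyperbolic or latent; this only illustrates the routing rule):

```python
import math

def greedy_route(adj, coords, src, dst):
    """Myopic greedy navigation: at each hop, forward to the
    neighbor geometrically closest to the destination; fail if
    no neighbor improves on the current distance."""
    path = [src]
    cur = src
    while cur != dst:
        best = min(adj[cur], key=lambda v: math.dist(coords[v], coords[dst]))
        if math.dist(coords[best], coords[dst]) >= math.dist(coords[cur], coords[dst]):
            return None  # stuck in a local minimum: greedy routing fails
        path.append(best)
        cur = best
    return path

# four nodes on a line; node 0 reaches node 3 greedily through 1 and 2
coords = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_route(adj, coords, 0, 3))  # -> [0, 1, 2, 3]
```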

... Random graphs and synthetic data. The experiments on synthetic data are conducted with DAGs generated from two sets of random graphs: (i) Erdős-Rényi (ER) graphs and (ii) Scale-free (SF) [BA99] graphs, as characterized in Table 1. ...

Inferring causal relationships from observational data is a fundamental yet highly complex problem when the number of variables is large. Recent advances have made much progress in learning causal structural equation models (SEMs) but still face challenges in scalability. This paper aims to efficiently discover causal DAGs from high-dimensional data. We investigate a way of recovering causal DAGs from inverse covariance estimators of the observational data. The proposed algorithm, called ICID (inverse covariance estimation and *independence-based* decomposition), searches for a decomposition of the inverse covariance matrix that preserves its nonzero patterns. This algorithm benefits from properties of positive definite matrices supported on *chordal* graphs and the preservation of nonzero patterns in their Cholesky decomposition; we find exact mirroring between the support-preserving property and the independence-preserving property of our decomposition method, which explains its effectiveness in identifying causal structures from the data distribution. We show that the proposed algorithm recovers causal DAGs with a complexity of O(d²) in the context of sparse SEMs. The advantageously low complexity is reflected by the good scalability of our algorithm in thorough experiments and comparisons with state-of-the-art algorithms.
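A common way to generate the random DAGs used in such experiments is to draw an undirected ER graph and orient its edges along a random vertex ordering; the SF variant would use a BA-style generator instead. A sketch of that construction (an assumption about the experimental setup, not the paper's exact generator):

```python
import random

def er_dag(n, p, seed=0):
    """Random DAG: draw an Erdős-Rényi graph, then orient every
    edge from the lower to the higher rank in a random vertex
    ordering, which guarantees acyclicity."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    rank = {v: i for i, v in enumerate(order)}
    return [(u, v) if rank[u] < rank[v] else (v, u)
            for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def is_acyclic(n, edges):
    """Kahn-style check: repeatedly remove in-degree-0 vertices."""
    indeg = [0] * n
    out = {u: [] for u in range(n)}
    for u, v in edges:
        indeg[v] += 1
        out[u].append(v)
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == n

edges = er_dag(20, 0.2)
print(len(edges), is_acyclic(20, edges))
```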

... The most intuitive approach to modeling such complex systems is to treat them as networks, where nodes represent component units and edges represent connectivity. Importantly, empirical findings have unraveled the presence of universal features in most socio-technical networks, e.g., small-world [10] and scale-free (SF) [11] properties, which has inspired extensive studies toward a better understanding of the impact of population infrastructures (network connectivity) on dynamical processes [12][13][14][15], including robustness [16,17], synchronization [18][19][20], consensus [21][22][23][24], control [25][26][27][28], evolutionary games [29][30][31][32][33][34][35][36], traffic routing [37][38][39], self-organized criticality [40][41][42][43], etc. ...

An emerging disease is an infectious epidemic caused by a transmissible pathogen that has either appeared for the first time or already existed in human populations, and that has the capacity to increase rapidly in incidence as well as geographic range. Adapting to the human immune system, emerging diseases may trigger large-scale pandemic spreading, such as the transnational spreading of SARS, the global outbreak of A(H1N1), and the recent potential invasion of avian influenza A(H7N9). To study the dynamics mediating the transmission of emerging diseases, the spatial epidemiology of networked metapopulations provides a valuable modeling framework, which takes spatially distributed factors into consideration. This review elaborates the latest progress on spatial metapopulation dynamics, discusses empirical and theoretical findings that verify the validity of networked metapopulations, and covers their application in evaluating the effectiveness of disease intervention strategies.

... For example, countries that are now depended on will be more likely to be depended on by more countries in the future. This self-reinforcing trend favouring stronger countries is called preference dependence, proposed by Barabasi and Albert [13]. Two countries may also achieve a transmission system of relations through one or more third-party countries, or form a community with sophisticated relations [14]. ...

As the foundation of the industrial economy, the equipment manufacturing industry takes an important position in China-EU trade. Based on an analysis of the overall trend and structure of China-EU equipment manufacturing trade in 2007–2020, this article incorporates trade concentration into trade dependence metrics and then calculates the degree of interdependence between China and the EU in equipment manufacturing trade in 2020. The perspective of intra-industry specialization is used to analyze China-EU equipment manufacturing trade dependency in 2020. The results show that: (1) Although China-EU equipment manufacturing trade has continued to grow, China had an imbalanced export structure to the EU, with electronic equipment exports being too high; (2) Regardless of import or export, the trade dependence of EU countries on China in equipment manufacturing was higher than that of China on EU countries; (3) China mainly depended on the EU for high-end equipment manufacturing trade, which brings risks to Chinese manufacturing supply chains.

... The gut microbiome of the normal, obese, and NASH groups exhibited distinct patterns of microbial interactions (Figure 2). In normal subjects, the microbial interaction network exhibited a heterogeneous pattern (R² = 0.70 for the power-law fit) [34], characteristic of a typical scale-free network (Figure 2A). The hub species belonging to Lachnospiraceae (Positive [P]/Negative [N] = 58/47), Ruminococcaceae (P/N = 20/10), and Bacteroidaceae (P/N = 52/33) mainly imposed a positive impact, while those belonging to Family XI (P/N = 0/38) exerted a mainly negative impact on other microbes in the normal gut microbiome (Figure 2G and Supporting Information: Table S4). ...

The dysbiosis of the gut microbiome is one of the pathogenic factors of nonalcoholic fatty liver disease (NAFLD) and also affects the treatment and intervention of NAFLD. Among gut microbiomes, keystone species that regulate the integrity and stability of an ecological community have become potential intervention targets for NAFLD. Here, we collected stool samples from 22 patients with nonalcoholic steatohepatitis (NASH), 25 obese patients, and 16 healthy individuals from New York for 16S rRNA gene sequencing. An algorithm was implemented to identify keystone species based on causal inference theories and dynamic intervention simulation. External validation was performed in an independent cohort from California. Eight keystone species in the gut of NAFLD patients, represented by Porphyromonas loveana, Alistipes indistinctus, and Dialister pneumosintes, were identified, which could efficiently restore the microbial composition of the NAFLD gut toward a normal microbiome with 92.3% recovery. These keystone species regulate intestinal amino acid metabolism and the acid–base environment to promote the growth of the butyrate‐producing Lachnospiraceae and Ruminococcaceae species that are significantly reduced in NAFLD patients. Our findings demonstrate the importance of keystone species in restoring the microbial composition toward a normal gut microbiome, suggesting a novel potential microbial treatment for NAFLD. The dysbiosis of butyrate‐producing bacteria is a critical factor contributing to the development of NAFLD. In this study, we applied an algorithm integrating current causal inference theories with dynamic intervention simulation to identify keystone species in the gut microbiome. The identified NASH keystone species combination, represented by Porphyromonas loveana, Alistipes indistinctus, and Dialister pneumosintes, showed the highest potential for microbial intervention in NASH, providing potential precise intervention strategies for NAFLD treatment.

... It is important to note that this is not a statistical result, in the sense that as long as there is one node targeted by many others with low out-degree, there will be almost no room for ranking control, regardless of the topology of the rest of the network. As we will see later, this is very reminiscent of the scale-free [21] network paradigm: indeed, scale-free networks present these high in-degree nodes pointed to by low out-degree ones. ...

The PageRank algorithm is a landmark in the development of the Internet as we know it, and its network-theoretic properties are still being studied. Here we present a series of results regarding the parametric controllability of its centralities, and develop a geometric method to crack the problem of assessing the control of its rankings. We apply these methods to the biplex PageRank case, comparing both centrality measures by means of numerical computations on real and synthetic network datasets.
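As a rough illustration of the centrality whose control is studied above (a generic sketch of standard PageRank, not the authors' parametric or biplex formulation; the damping factor and three-node example are illustrative assumptions):

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-10, max_iter=1000):
    """Damped power iteration for PageRank on a directed adjacency matrix.

    adj[i, j] = 1 means an edge from node i to node j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = alpha * P.T @ r + (1 - alpha) / n
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
    return r

# Tiny directed example: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
ranks = pagerank(A)
```

Here node 2, which receives links from both other nodes, ends up with the largest score.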

... Among the many models and approaches developed to generate networks [13,14,15,16,17,18], most emphasize simulating the overall network topology and rarely consider the role of vertex attributes. Exponential random graph models (ERGMs) are among the more flexible options [16], although they have been noted to have unstable parameter estimation on large networks and on networks with dyadic-dependent terms [18]. ...

Protecting medical privacy can create obstacles in the analysis and distribution of healthcare graphs and the statistical inferences accompanying them. We pose a graph simulation model which generates networks using degree and property augmentation (GRANDPA), and provide a flexible R package that allows users to create graphs that preserve vertex attribute relationships while approximately retaining the topological properties observed in the original graph (e.g., community structure). We support our proposed algorithm using a case study based on Zachary's karate network and a patient-sharing graph generated from Medicare claims data in 2019. In both cases, we find that community structure is preserved, and the normalized root mean square error between cumulative degree distributions is low (0.0508 and 0.0514, respectively).

... Graph generative models date back to the Erdős-Rényi model (Erdös & Rényi, 1959), of which the probability of generating individual edges is the same. Other well-known graph generative models include the stochastic block model (Holland et al., 1983), the small-world model (Watts & Strogatz, 1998), and the preferential attachment model (Barabási & Albert, 1999). Recently, deep graph generative models instead parameterize the probability of generating edges and nodes using deep neural networks in, e.g., the auto-regressive fashion (Li et al., 2018;You et al., 2018;Liao et al., 2019) or variational autoencoder fashion (Kipf & Welling, 2016;Grover et al., 2018;. ...
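Since the preferential-attachment model recurs throughout these citing works, a minimal sketch of Barabási-Albert growth may be useful (the clique initialisation and parameter values are illustrative choices, not taken from any cited paper):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a graph by preferential attachment (Barabasi-Albert model).

    Start from a clique of m + 1 nodes; each new node attaches m edges
    to existing nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # Listing each endpoint once per incident edge makes uniform sampling
    # from this list equivalent to degree-proportional node choice.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
    return edges

g = barabasi_albert(200, 2, seed=42)
```

Repeated sampling from the endpoint list is what produces the scale-free power-law degree distribution described in the head article.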

Neural architectures can be naturally viewed as computational graphs. Motivated by this perspective, we study neural architecture search (NAS) through the lens of learning random graph models. In contrast to existing NAS methods, which largely focus on searching for a single best architecture, i.e., point estimation, we propose GraphPNAS, a deep graph generative model that learns a distribution of well-performing architectures. Relying on graph neural networks (GNNs), GraphPNAS can better capture the topologies of good neural architectures and the relations between operators therein. Moreover, our graph generator leads to a learnable probabilistic search method that is more flexible and efficient than the commonly used RNN generators and random search methods. Finally, we learn our generator via an efficient reinforcement learning formulation for NAS. To assess the effectiveness of GraphPNAS, we conduct extensive experiments on three search spaces, including the challenging RandWire on TinyImageNet, ENAS on CIFAR10, and NAS-Bench-101/201. The complexity of RandWire is significantly larger than that of other search spaces in the literature. We show that our proposed graph generator consistently outperforms RNN-based ones and achieves better or comparable performance than state-of-the-art NAS methods.

... Tracing back to the 1960s, the study of complex networks was initiated by two mathematicians, Erdős and Rényi, who founded the systematic study of random graphs [1]. In 1998, the small-world property was uncovered by Watts and Strogatz [2], and in 1999 the scale-free property was discovered by Barabási and Albert [3]. These three studies are the basis of complex network research, and the latter two also bridged the gap between theory and reality. ...

Identifying important nodes in complex networks is essential in both theoretical and applied fields. A small number of such nodes have decisive power over information spreading, so it is important to find a set of nodes that maximizes propagation in networks. Various improvements on baseline ranking methods have been proposed, but no single enhanced method covers all of the base methods. In this paper, we propose a penalized method called RCD-Map, short for resampling community detection to maximize propagation, built on five baseline ranking methods (degree centrality, closeness centrality, betweenness centrality, k-shell, and PageRank) using nodes' local community information. We perturb the original graph by resampling to decrease the bias and randomness introduced by community detection methods, both overlapping and non-overlapping. To assess the performance of our identification method, the SIR (susceptible-infected-recovered) model is applied to simulate the information propagation process. The results show that the penalized methods generally perform better, yielding a wider propagation range.
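The SIR simulation used to score spreaders in works like this one can be sketched as follows (a generic discrete-time variant with illustrative parameters, not the paper's exact simulation):

```python
import random

def sir_spread(neighbors, seeds, beta=0.2, gamma=1.0, rng=None):
    """Discrete-time SIR: each infected node infects each susceptible
    neighbour with probability beta, then recovers with probability gamma.
    Returns the final number of ever-infected (recovered) nodes."""
    rng = rng or random.Random(0)
    infected, recovered = set(seeds), set()
    while infected:
        new_inf = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in infected and v not in recovered and v not in new_inf:
                    if rng.random() < beta:
                        new_inf.add(v)
        for u in list(infected):
            if rng.random() < gamma:
                recovered.add(u)
                infected.discard(u)
        infected |= new_inf
    return len(recovered)

# Star graph: hub 0 connected to 1..9 -- seeding at the hub spreads widely.
nbrs = {0: list(range(1, 10)), **{i: [0] for i in range(1, 10)}}
reach = sir_spread(nbrs, seeds=[0], beta=0.5)
```

Ranking methods are then compared by the average reach obtained when seeding at their top-ranked nodes.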

... Representing agents as nodes and interactions as links is a description general enough to fit many real systems, such as protein-protein interaction networks, social networks, transportation networks, electrical grids, and many others. Over the years, network science has led to the realisation that these different systems share many common structural properties, and it is often possible to gain system-specific insights through general network-based problems and tools [2,3,4,5,6]. In the study of human disease, for example, the sub-field of Network Medicine has emerged from the success of network-based tools in problems such as the prediction of drug combinations and cancer-driver genes [7,8,9]. ...

Link prediction methods use patterns in known network data to infer which connections may be missing. Previous work has shown that continuous-time quantum walks can be used to represent path-based link prediction, which we further study here to develop a more optimized quantum algorithm. Using a sampling framework for link prediction, we analyse the query access to the input network required to produce a certain number of prediction samples. Considering both well-known classical path-based algorithms using powers of the adjacency matrix as well as our proposed quantum algorithm for path-based link prediction, we argue that there is a polynomial quantum advantage in the dependence on $N$, the number of nodes in the network. We further argue that the quantum complexity of our link prediction algorithm, although sub-linear in $N$, is limited by the complexity of performing a quantum simulation of the network's adjacency matrix, which may prove to be an important problem in the development of quantum algorithms for network science in general.
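The classical path-based predictors referred to above score node pairs by entries of powers of the adjacency matrix; a truncated Katz-style sketch (the decay parameter and truncation length are illustrative assumptions):

```python
import numpy as np

def katz_scores(A, beta=0.05, max_len=5):
    """Path-based link prediction: score(i, j) = sum_k beta^k (A^k)_{ij},
    truncated at max_len. The (A^2)_{ij} term alone is the
    common-neighbour count for the pair (i, j)."""
    S = np.zeros_like(A, dtype=float)
    Ak = np.eye(A.shape[0])
    for k in range(1, max_len + 1):
        Ak = Ak @ A
        S += beta ** k * Ak
    return S

# Path graph 0-1-2: nodes 0 and 2 share neighbour 1, so the unlinked
# pair (0, 2) receives a positive score from length-2 paths.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
S = katz_scores(A)
```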

... There are many generative models for unsigned networks, including preferential attachment [Barabási and Albert, 1999], Erdös-Rényi [Erdös et al., 1960], Kronecker [Leskovec et al., 2010], and directed scale-free [Bollobás et al., 2003], but only a few generative models for signed networks [Jung et al., 2020, Derr et al., 2018]. For unsigned networks, model selection frameworks have been developed that assess the fit of a model to a dataset based on the maximum likelihood [Leskovec et al., 2010, Bezáková et al., 2006]. ...

Signed networks, i.e., networks with positive and negative edges, commonly arise in various domains from social media to epidemiology. Modeling signed networks has many practical applications, including the creation of synthetic data sets for experiments where obtaining real data is difficult. Influential prior works proposed and studied various graph topology models, as well as the problem of selecting the most fitting model for different application domains. However, these topology models are typically unsigned. In this work, we pose a novel Maximum-Likelihood-based optimization problem for modeling signed networks given their topology, and showcase it in the context of gene regulation. Regulatory interactions of genes play a key role in organism development and, when broken, can lead to serious organism abnormalities and diseases. Our contributions are threefold: First, we design a new class of signage models for a given topology and, based on the parameter setting, discuss its biological interpretations for gene regulatory networks (GRNs). Second, we design algorithms computing the Maximum Likelihood; depending on the parameter setting, these range from closed-form expressions to MCMC sampling. Third, we evaluate our algorithms on synthetic datasets and large real-world GRNs. Our work can lead to the prediction of unknown gene regulations, the generation of biological hypotheses, and realistic GRN benchmark datasets.

... In terms of synthetic data, we created 10 graphs for each of the Erdos-Renyi (ER), ForestFire (FF) [17], LPA [4] and DMC [30] models. For all these models, each graph includes 500 nodes and 5000 edges, and we sampled the networks uniformly over their parameter space. ...

... This view is adopted for network structural analysis, traffic forecasting, abnormal pattern detection, and global traffic safety analysis. For instance, [49]-[52] consider traffic as an application of complex network theory, where the network dynamics can be represented by collections of small-world networks [53] and random scale-free networks [54]. A small-world network is one with a high clustering coefficient and small average geodesic distance, namely short pairwise shortest-path lengths. ...
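The two quantities defining a small-world network above, the clustering coefficient and the average geodesic length, can be computed directly; a minimal sketch on an adjacency-list graph (the toy graph is an illustrative assumption):

```python
from itertools import combinations
from collections import deque

def clustering_coefficient(nbrs, v):
    """Fraction of closed triangles among pairs of v's neighbours."""
    ns = nbrs[v]
    if len(ns) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(ns, 2) if b in nbrs[a])
    return 2.0 * links / (len(ns) * (len(ns) - 1))

def avg_shortest_path(nbrs):
    """Mean geodesic length over all connected node pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in nbrs:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in nbrs[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for u, d in dist.items():
            if u != src:
                total, pairs = total + d, pairs + 1
    return total / pairs

# Triangle 0-1-2 plus a pendant node 3 attached to node 2.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
c0 = clustering_coefficient(nbrs, 0)
L = avg_shortest_path(nbrs)
```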

Driving safety analysis has recently experienced unprecedented improvements thanks to technological advances in precise positioning sensors, artificial intelligence (AI)-based safety features, autonomous driving systems, connected vehicles, high-throughput computing, and edge computing servers. In particular, deep learning (DL) methods have empowered high-volume video processing to extract safety-related features from the massive videos captured by roadside units (RSUs). Safety metrics are commonly used measures to investigate crashes and near-conflict events. However, these metrics provide limited insight into the overall network-level traffic management. On the other hand, some safety assessment efforts are devoted to processing crash reports and identifying spatial and temporal patterns of crashes that correlate with road geometry, traffic volume, and weather conditions. This approach relies merely on crash reports and ignores the rich information in traffic videos that can help identify the role of safety violations in crashes. To bridge these two perspectives, we define a new set of network-level safety metrics (NSM) to assess the overall safety profile of traffic flow by processing imagery taken by RSU cameras. Our analysis suggests that NSMs show significant statistical associations with crash rates. This approach differs from simply generalizing the results of individual crash analyses, since all vehicles contribute to calculating NSMs, not only the ones involved in crash incidents. This perspective considers the traffic flow as a complex dynamic system in which the actions of some nodes can propagate through the network and influence the crash risk for other nodes. The analysis is carried out using six video cameras in the state of Arizona along with a 5-year crash report obtained from the Arizona Department of Transportation (ADOT). The results confirm that NSMs modulate the baseline crash probability. 
Therefore, online monitoring of NSMs can be used by traffic management teams and AI-based traffic monitoring systems for risk analysis and traffic control.

... We need three steps in this process: 1) generating the weighted graphs G W and G P , and the adjacency matrix A; 2) generating data matrices X and M based on G W and G P ; 3) running all algorithms on all or part of X, M and A, depending on whether the model considers that kind of information, and computing the respective metrics. In particular, following (Pamfil et al. 2020), we use either the Erdős-Rényi (ER) model (Newman 2018) or the Barabási-Albert (BA) model (Barabási and Albert 1999) to generate intra-slice graphs G W . For the inter-slice graph G P , we use the ER model or the Stochastic Block Model (SBM) (Newman 2018). ...

Estimating the structure of directed acyclic graphs (DAGs) of features (variables) plays a vital role in revealing the latent data generation process and providing causal insights in various applications. Although there have been many studies on structure learning with various types of data, the structure learning on the dynamic graph has not been explored yet, and thus we study the learning problem of node feature generation mechanism on such ubiquitous dynamic graph data. In a dynamic graph, we propose to simultaneously estimate contemporaneous relationships and time-lagged interaction relationships between the node features. These two kinds of relationships form a DAG, which could effectively characterize the feature generation process in a concise way. To learn such a DAG, we cast the learning problem as a continuous score-based optimization problem, which consists of a differentiable score function to measure the validity of the learned DAGs and a smooth acyclicity constraint to ensure the acyclicity of the learned DAGs. These two components are translated into an unconstraint augmented Lagrangian objective which could be minimized by mature continuous optimization techniques. The resulting algorithm, named GraphNOTEARS, outperforms baselines on simulated data across a wide range of settings that may encounter in real-world applications. We also apply the proposed approach on two dynamic graphs constructed from the real-world Yelp dataset, demonstrating our method could learn the connections between node features, which conforms with the domain knowledge.
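The smooth acyclicity constraint mentioned above is commonly instantiated, as in the original NOTEARS work, by h(W) = tr(e^{W∘W}) − d; the sketch below (a truncated-series variant, not necessarily GraphNOTEARS' exact formulation) illustrates that h vanishes exactly on DAGs and is positive when the weighted graph contains a cycle:

```python
import numpy as np

def acyclicity_penalty(W, terms=20):
    """NOTEARS-style smooth acyclicity measure h(W) = tr(exp(W*W)) - d.

    h(W) == 0 iff the weighted graph W is a DAG; the matrix exponential
    trace is computed here via a truncated power series."""
    d = W.shape[0]
    M = W * W                # elementwise square keeps entries non-negative
    term = np.eye(d)
    total = np.trace(term)   # k = 0 term
    for k in range(1, terms + 1):
        term = term @ M / k  # accumulates M^k / k!
        total += np.trace(term)
    return total - d

# A DAG (strictly upper-triangular weights) scores ~0; a 2-cycle does not.
dag = np.array([[0.0, 0.7], [0.0, 0.0]])
cyc = np.array([[0.0, 0.7], [0.5, 0.0]])
```

Gradient-based solvers drive this penalty to zero while minimizing the data-fit score.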

... al. 2009; Charoenwong et al. 2020; Tóth et al. 2021). It is well documented that human social networks are more clustered than would be expected from sensible null models (Erdős et al. 1960; Barabási et al. 1999). Closure is typically measured as the average of the local clustering coefficient over all nodes, which captures the fraction of closed triangles among a node's direct neighbors. ...

Large-scale human social network structure is typically inferred from digital trace samples of online social media platforms or mobile communication data. Instead, here we investigate the social network structure of a complete population, where people are connected by high-quality links sourced from administrative registers of family, household, work, school, and next-door neighbors. We examine this multilayer social opportunity structure through three common concepts in network analysis: degree, closure, and distance. Our findings show how particular network layers contribute to the presumably universal scale-free and small-world properties of networks. Furthermore, we suggest a novel measure of excess closure and apply it in a life-course perspective to show how the social opportunity structure of individuals varies with age, socio-economic status, and education level. Our work provides new entry points to understand individual socio-economic failure and success, as well as persistent societal problems of inequality and segregation.

The most common way of magma transfer towards the surface is through dyking. Dykes can generate stresses at their tips and in the surrounding host rock, initiating surficial deformation, seismic activity, and graben formation. Although scientists can study active deformation and seismicity via volcano monitoring, the conditions under which dykes induce grabens during their emplacement in the shallow crust are still enigmatic. Here, we explore through FEM numerical modelling the conditions that could have been associated with dyke-induced graben formation during the 1928 fissure eruption on Mt. Etna (Italy). We use stratigraphic data of the shallow host rock successions along the western and eastern sections of the fissure, which became the basis for several suites of numerical models and sensitivity tests. The layers had dissimilar mechanical properties, which allowed us to investigate the studied processes more realistically. We investigated the boundary conditions using a dyke overpressure range of 1–10 MPa and a local extensional stress field of 0.5–2 MPa. We studied the effect of field-related geometrical parameters by employing a layer thickness range of 0.1–55 m and a variable layer sequence within the existing stratigraphy. We also tested how more compliant pyroclastics, such as scoria (if present), could have affected the accumulation of stresses around the dyke. We also explored how inclined sheets and vertical dykes can generate grabens at the surface. We propose that the mechanical heterogeneity of the flank succession and the local extensional stress field largely control both the dyke path and dyke-induced graben formation, regardless of increased dyke overpressure values. Similarly, soft materials in the stratigraphy can greatly suppress the shear stresses in the vicinity of a propagating dyke, encouraging narrow grabens at the surface if only the fracturing condition is satisfied, while inclined sheets tend to form semi-grabens. 
Finally, we provide some insights into the structural evolution of the 1928 lateral dyking event. These findings can, in principle, be applied to similar case studies worldwide.

For decades, network-organized reaction-diffusion models have been widely used to study ecological and epidemiological phenomena in discrete space. However, the high dimensionality of these nonlinear systems has placed a long-standing restriction on developing the normal forms of various bifurcations. In this paper, we take an important step by presenting a rigorous procedure for calculating the normal form associated with the Hopf bifurcation of general network-organized reaction-diffusion systems, which is similar to, but can be much more intricate than, the corresponding procedure for the extensively explored PDE systems. To show the potential applications of our theoretical results, we conduct a detailed Hopf bifurcation analysis for a multi-patch predator-prey system defined on any undirected connected underlying network and on the particular non-periodic one-dimensional lattice network. Remarkably, we reveal that the structure of the underlying network imposes a significant effect on the occurrence of spatially nonhomogeneous Hopf bifurcations.

With the rapid development of technology, the number of packets that must be transmitted over networks is increasing, which leads to congestion and degrades the user experience. It is therefore necessary to adjust packet transmission paths to raise the network's transmission threshold. In this paper, we abstract real-world networks as scale-free networks, using nodes to represent individuals and edges to represent connections between them. Scale-free networks are strongly inhomogeneous: nodes cannot connect to all other nodes, which leaves gaps in the network. The notion of a "structural hole" describes these gaps, which give some nodes a degree of control over the whole network; however, it only considers the relationship between nodes and their neighbors, not the overall network topology. The k-shell decomposition algorithm can efficiently and accurately identify the position of nodes in the network and is therefore a global index, but it does not consider the topological relationships among neighbors. Both structural hole theory and k-shell theory characterize node importance, and combining the two compensates for each other's flaws, making node importance, and hence node load, more evenly distributed across the network; the more even the load, the greater the gain in traffic capacity. We therefore propose an edge-adding strategy combining structural hole theory and the k-shell algorithm to improve traffic capacity. Extensive simulations were performed to estimate the effectiveness of the proposed method under an efficient routing strategy. 
The simulation results show that, for a fixed network size and regardless of the average node degree, the proposed strategy improves network traffic capacity, reduces the maximum betweenness centrality of nodes, and balances node load across the network. At the same time, it improves the robustness of the network.
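The k-shell decomposition invoked above is standard: repeatedly peel all nodes of degree at most k, assigning them shell index k, then increase k; a minimal sketch (the example graph is an illustrative assumption):

```python
def k_shell(nbrs):
    """K-shell decomposition: returns each node's shell index (coreness)."""
    deg = {v: len(ns) for v, ns in nbrs.items()}
    alive = set(nbrs)
    shell, k = {}, 0
    while alive:
        peel = [v for v in alive if deg[v] <= k]
        if not peel:
            k += 1
            continue
        while peel:
            v = peel.pop()
            if v not in alive:   # skip duplicates queued twice
                continue
            shell[v] = k
            alive.remove(v)
            for w in nbrs[v]:    # peeling v lowers its neighbours' degrees
                if w in alive:
                    deg[w] -= 1
                    if deg[w] <= k:
                        peel.append(w)
    return shell

# Triangle 0-1-2 plus a pendant node 3: the pendant lands in shell 1,
# the triangle in shell 2.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
shells = k_shell(nbrs)
```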

In this paper we propose an evolving network model, a randomized version of the pseudofractal graphs obtained by introducing an evolutionary parameter 0<p<1. Our network model grows exponentially over time and can be generated iteratively: at each time step, each existing edge independently recruits a new node with probability p and connects to it with both endpoints. We first briefly discuss the network size, which corresponds to a supercritical branching process. We then show that the asymptotic degree distribution of our network model is uniquely determined by a functional equation for its probability generating function.
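The growth rule described above (each edge independently recruits a new node with probability p, connecting to both endpoints) is easy to simulate; a sketch, with the single-edge starting configuration and parameter values as illustrative assumptions:

```python
import random

def grow_pseudofractal(steps, p, rng=None):
    """Randomized pseudofractal growth: each step, every existing edge
    independently (with probability p) spawns a new node joined to both
    of its endpoints. Starts from a single edge (0, 1); returns edges."""
    rng = rng or random.Random(1)
    edges = [(0, 1)]
    next_node = 2
    for _ in range(steps):
        for (u, v) in list(edges):      # iterate over a snapshot
            if rng.random() < p:
                edges.append((u, next_node))
                edges.append((v, next_node))
                next_node += 1
    return edges

edges = grow_pseudofractal(steps=6, p=0.5)
```

At p = 1 the rule recovers the deterministic pseudofractal graph, whose edge count triples every step.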

Epidemic outbreaks are often accompanied by the propagation of unconfirmed information. Especially in the early stages of an outbreak, the lack of adequate verification allows unconfirmed information to appear, which has a significant impact on the epidemic. Meanwhile, individuals under different emotions can also respond differently to epidemics. In this paper, a model of the interplay between epidemic spreading and unconfirmed information propagation is established in multilayer networks, considering individuals' emotional factors. The mean-field method is used to analyze the interacting dynamic propagation processes, and the epidemic threshold is obtained. Finally, the validity of the results is verified through theoretical analysis and numerical simulations on scale-free networks. The results show that the information, although unconfirmed, still helps curb the spread of the epidemic. In addition, individuals with different emotions adopt different self-protective behaviors, further affecting the spread of the epidemic.

The Virtual Microgrid (VM) method is a solution for addressing challenges in Conventional Distribution Networks (CDNs), such as power fluctuations or load mismatches, by actively partitioning the CDN into interconnected Microgrid-style VMs. Previous studies have paid little attention to the mutual interaction between the grid's partition performance and Distributed Energy Resource (DER) allocation. This paper proposes a new approach for dividing a large power grid into clusters using complex network theory. The approach integrates power flow dynamics, line impedance, generator-load relations and power generator cost-efficiency into a single static weighted adjacency matrix. Meanwhile, a multi-objective Genetic Algorithm (GA) planning structure is also presented for transforming a CDN into VMs with mutual interaction between partitioning and DER allocation. The proposed metric is tested in both transmission and distribution networks. The IEEE 118-bus system test shows that even with a higher value of the proposed indicator, there are fewer power exchanges between sub-networks. Meanwhile, in the 69-bus radial system tests, the GA-based co-planning method outperforms previous methods in forming more self-sufficient and more efficient interconnected VMs. An intermediate solution is suggested by implementing a trade-off between inter-VM power exchange and operation cost.

Inferring the topology of a network from its dynamics is a significant problem with both theoretical research significance and practical value. This paper considers how to reconstruct the network topology from continuous-time data on the network. Inspired by the generative adversarial network (GAN), we design a deep learning framework based on continuous-time network data. The framework predicts the edge connection probability between network nodes by learning the correlation between network node state vectors. To verify the accuracy and adaptability of our method, we conducted extensive experiments on scale-free networks and small-world networks at different network scales using three different dynamics: heat diffusion dynamics, mutualistic interaction dynamics, and gene regulation dynamics. Experimental results show that our method significantly outperforms five traditional correlation indices, demonstrating that it can reconstruct the topology of networks of different scales well under different network dynamics.
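A typical correlation-index baseline of the kind the authors compare against links two nodes whose state time series are strongly correlated; a minimal thresholded-Pearson sketch (the threshold and toy data are illustrative assumptions):

```python
import numpy as np

def correlation_reconstruct(X, threshold=0.8):
    """Baseline topology reconstruction: link two nodes when the absolute
    Pearson correlation of their state time series exceeds a threshold.

    X has shape (n_nodes, n_timesteps)."""
    C = np.corrcoef(X)
    A = (np.abs(C) > threshold).astype(int)
    np.fill_diagonal(A, 0)   # no self-loops
    return A

# Toy data: nodes 0 and 1 share dynamics (plus noise); node 2 is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=200)
X = np.vstack([base,
               base + 0.1 * rng.normal(size=200),
               rng.normal(size=200)])
A = correlation_reconstruct(X)
```

Learned methods aim to beat exactly this kind of pairwise-statistic baseline, which ignores indirect and nonlinear couplings.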

Mobile Instant Messengers (MIMs) have been vastly used for communication and information sharing in recent years. However, features of these applications such as group-based communication and broadcasting in channels also cause rumors to spread in MIM networks more quickly than in ordinary social networks. Although there is much work on modeling, analyzing, and controlling rumor dissemination in social networks, the mentioned features of MIMs are rarely considered. In this paper we propose a new model for soft rumor control in MIMs that considers rumor propagation in groups and channels. By soft rumor control we mean measures for enhancing people's knowledge and awareness of the rumor so as to persuade them to avoid spreading it. We suggest two soft rumor control mechanisms: a provenance-based decision-making process and the making of anti-rumor campaigns. In the first mechanism, in order to improve users' ability to take proper actions against rumors, they are equipped with rumor provenance information, including the level of trust in the rumor spreader, the reputation of the rumor's source, and the degree of credibility of the rumor. In the second mechanism, some MIM users who have more serious concerns about the rumor's effects form an anti-rumor campaign to fight its spread. The proposed model is formalized as an extended Partially Observable Markov Decision Process (POMDP) to capture the dynamics of rumor propagation and the control mechanisms. To evaluate the proposed model, we conduct a number of extensive agent-based simulation experiments on a synthesized MIM network that show the effectiveness of the proposed mechanisms in controlling rumor propagation. We also conduct sensitivity analyses to see the effects of different model parameters on the dynamics of rumor propagation under the control mechanisms. The proposed model helps MIM developers provide facilities to control rumor by collective wisdom. 
Furthermore, it helps people, NGOs, political parties, and others improve their rumor-fighting strategies by making properly designed anti-rumor campaigns.

Computational social science has become a branch of social science that uses computationally intensive methods to investigate and model social phenomena. Drawing on mathematics, physics, and computer science, together with analytic approaches like Social Network Analysis (SNA) and Machine Learning (ML), it develops and tests theories of complex social phenomena. In the emerging environment of social media, the new characteristics of social collective behavior and its extensive phenomena have become a hot spot of common concern across many disciplines. In this paper, we propose a general quantitative framework to discover social collective behavior in temporal social networks. The framework incorporates the Time-Correlation Function (T.C.F.) from statistical physics and an evolutionary approach from Machine Learning, and provides quantitative evidence of the existence of social collective behavior. Results show that collective behaviors are observed and that there exists a tiny fraction of users whose behavior is constantly replicated by the public, regardless of the behavior itself. Our method is assumption-independent and has the potential to be applied to various temporal systems.
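The Time-Correlation Function from statistical physics mentioned above is, for a stationary scalar series, the normalised autocovariance C(tau) = (⟨x(t)x(t+tau)⟩ − ⟨x⟩²)/Var(x); a sketch (the cosine test signal is an illustrative choice, not the paper's data):

```python
import numpy as np

def time_correlation(x, max_lag):
    """Normalised time-correlation function of a stationary series:
    C(tau) = (<x(t) x(t+tau)> - <x>^2) / var(x), so C(0) = 1."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    return np.array([((x[:len(x) - tau] * x[tau:]).mean() - mu ** 2) / var
                     for tau in range(max_lag + 1)])

# A pure cosine retains long-range correlations: C returns to ~1 at
# every full period (here, lag 50).
t = np.arange(1000)
c = time_correlation(np.cos(2 * np.pi * t / 50), max_lag=100)
```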

Extant studies suggest that the proximity between researchers and their structural positioning in the collaboration network may influence productivity and performance in collaborative research. In this paper, we analyze the co-authorship networks of three countries, viz. the USA, China, and India, constructed in consecutive non-overlapping 5-year time windows from bibliometric data of research papers published in the past decade in the rapidly evolving area of Artificial Intelligence and Machine Learning (AI&ML). Our analysis relies on observations ensuing from a comparison of the statistical properties of the evolving networks. We consider macro-level network properties which describe global characteristics, such as degree distribution, assortativity, and large-scale cohesion, as well as micro-level properties associated with the actors who have assumed central positions, defining a core in the network assembly with respect to the closeness centrality measure. For the analysis of the core actors, who are well connected with a large number of other actors, we consider the share of their affiliations with domestic institutes. We find a dominant representation of domestic affiliations of the core actors in high-productivity cases, such as China in the second time window and the USA in both the first and second. Our study therefore suggests that the domestic affiliation of the core actors, who can access network resources more efficiently than other actors, influences and catalyzes collaborative research.

Clusters of genetically similar infections suggest rapid transmission and may indicate priorities for public health action or reveal underlying epidemiological processes. However, clusters often require user-defined thresholds and are sensitive to non-epidemiological factors, such as non-random sampling. Consequently, the ideal threshold for public health applications varies substantially across settings. Here, we show a method which selects optimal thresholds for phylogenetic (subset tree) clustering based on population. We evaluated this method on HIV-1 pol datasets (n = 14,221 sequences) from four sites in the USA (Tennessee, Washington), Canada (Northern Alberta) and China (Beijing). Clusters were defined by tips descending from an ancestral node (with a minimum bootstrap support of 95%) through a series of branches, each with a length below a given threshold. Next, we used pplacer to graft new cases onto the fixed tree by maximum likelihood. We evaluated the effect of varying branch-length thresholds on cluster growth as a count outcome by fitting two Poisson regression models: a null model that predicts growth from cluster size, and an alternative model that includes mean collection date as an additional covariate. The alternative model was favoured by AIC across most thresholds, with optimal (greatest difference in AIC) thresholds ranging 0.007–0.013 across sites. The range of optimal thresholds was more variable when re-sampling 80% of the data by location (IQR 0.008–0.016, n = 100 replicates). Our results use prospective phylogenetic cluster growth and suggest that there is more variation in effective thresholds for public health than those typically used in clustering studies.
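The model comparison described above, a null Poisson model against an alternative with one extra covariate, judged by AIC, can be sketched with a hand-rolled Newton-Raphson fit (the simulated data and coefficients are illustrative assumptions, not values from the study):

```python
import numpy as np

def poisson_fit_aic(X, y, iters=50):
    """Fit a Poisson regression by Newton-Raphson; return (coef, AIC).

    X: design matrix (n, p) with an intercept column; y: observed counts.
    The AIC omits the constant sum(log y!) term, which cancels when
    comparing models fit to the same data."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    mu = np.exp(X @ beta)
    loglik = np.sum(y * np.log(mu) - mu)
    return beta, 2 * X.shape[1] - 2 * loglik

# Simulated cluster growth driven by cluster size and mean collection date.
rng = np.random.default_rng(1)
n = 400
size = rng.uniform(0, 3, n)
date = rng.uniform(0, 1, n)
y = rng.poisson(np.exp(-0.5 + 0.2 * size + 0.8 * date))
X_null = np.column_stack([np.ones(n), size])          # growth ~ size
X_alt = np.column_stack([np.ones(n), size, date])     # growth ~ size + date
_, aic_null = poisson_fit_aic(X_null, y)
_, aic_alt = poisson_fit_aic(X_alt, y)
```

When the date covariate genuinely drives growth, as here, the alternative model attains the lower AIC despite its extra parameter.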

In recent years, several game models have been proposed to protect critical infrastructure networks. However, they have mainly focused on the protection of key nodes, and few models consider the fixed-point use of resources. Hence, in this paper, we propose a non-zero-sum simultaneous game model based on the Cournot model. We also present a novel method of critical node centrality identification based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Simulating the game on scale-free, small-world, and random networks, we find that the fixed operating nodes and the network topology are key factors in payoffs under resource constraints. In addition, a robustness analysis of the networks under various sensitivity parameters is given, and some effective optimal strategies are obtained to provide decision support for policy-makers.
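
The TOPSIS ranking step mentioned above follows a standard recipe: normalize the criteria matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A minimal sketch follows; the centrality criteria, their values, and the equal weights are hypothetical, since the paper's exact criteria are not given here.

```python
import numpy as np

def topsis_scores(criteria, weights):
    """Rank alternatives (rows) on benefit criteria (columns) via TOPSIS."""
    M = np.asarray(criteria, float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights, float)
    # 2. Ideal-best and ideal-worst solutions (all criteria are benefits here).
    best, worst = V.max(axis=0), V.min(axis=0)
    # 3. Euclidean distances to the two ideal solutions.
    d_best  = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    # 4. Relative closeness: 1 = ideal node, 0 = anti-ideal node.
    return d_worst / (d_best + d_worst)

# Rows: nodes; columns: hypothetical centrality criteria
# (e.g. degree, betweenness, closeness), all treated as benefits.
centrality = np.array([
    [9.0, 0.80, 0.90],   # node 0 dominates every criterion
    [4.0, 0.30, 0.50],
    [2.0, 0.10, 0.40],
    [7.0, 0.60, 0.70],
])
scores = topsis_scores(centrality, weights=[1/3, 1/3, 1/3])
ranking = np.argsort(-scores)  # most critical node first
```

A node that dominates every criterion gets a closeness score of exactly 1 and ranks first, which is the sanity check usually applied to a TOPSIS implementation.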

As a measure of complexity, information entropy is frequently used to categorize time series in applications such as machinery fault diagnosis and biological signal identification, and is thought of as a characteristic of dynamic systems. Many entropies, however, are ineffective in multivariate scenarios due to correlations. In this paper, we propose a local structure entropy (LSE) based on the idea of a recurrence network. Given a certain tolerance and scale, LSE values can distinguish multivariate chaotic sequences from stochastic signals. Three financial market indices are used to evaluate the proposed LSE. The results show that the LSE values of the FTSE 100 and S&P 500 are higher than that of the SZI, which indicates that the European and American stock markets are more sophisticated than the Chinese stock market. Additionally, using decision trees as the classifiers, LSE is employed to detect bearing faults. LSE achieves higher recognition accuracy than permutation entropy.
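
LSE itself is the paper's contribution, so it is not reproduced here; instead, the following sketch implements the permutation-entropy baseline (the Bandt–Pompe ordinal-pattern entropy) against which LSE is compared. The embedding dimension m and delay tau are the usual defaults, not values from the paper.

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy via Bandt-Pompe ordinal patterns."""
    x = np.asarray(x, float)
    counts = Counter()
    for i in range(len(x) - (m - 1) * tau):
        window = x[i : i + (m - 1) * tau + 1 : tau]
        counts[tuple(np.argsort(window))] += 1  # ordinal pattern of the window
    p = np.array(list(counts.values()), float)
    p /= p.sum()
    # Shannon entropy of the pattern distribution, normalized by log(m!)
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

rng = np.random.default_rng(1)
pe_trend = permutation_entropy(np.arange(1000))        # fully ordered series
pe_noise = permutation_entropy(rng.normal(size=5000))  # white noise
```

A monotone series produces a single ordinal pattern and entropy 0, while white noise spreads mass over all m! patterns and approaches 1; complexity measures such as LSE are evaluated against exactly this kind of contrast.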

Utilizing a meaningful graph plays an essential part in the performance of graph-based algorithms. However, a ground-truth graph representing the relationships between data points is not readily available in many applications. This paper proposes a graph learning method based on sensitivity analysis over a deep learning framework, called GL-SADL. The proposed method consists of two steps. First, it estimates the signal value at each vertex from the signal values of the other vertices with a Deep Neural Network (DNN) block. Then a sensitivity analysis approach is applied to each DNN block to determine how the input signal values influence the DNN's response. This procedure leads us to the underlying graph structure. The use of DNNs allows us to exploit the non-linear modeling capacity of neural networks for the observed graph signals. In addition, since DNNs are general function approximators, there is no need to make any prior assumptions about the distribution of the observed graph signals. Experiments with synthetic and real-world datasets demonstrate that the proposed method can infer meaningful graph structures from observed graph signals.

Understanding homeowners' energy-efficiency retrofit (EER) decision-making is a critical priority for reducing the adverse environmental impacts of the building sector and promoting a sustainable consumption transition. Existing research lacks attention to the dynamics and social interactions in the decision-making process of homeowner EER adoption. This paper applies the complex network-based evolutionary game approach with agent-based modeling to construct an evolutionary dynamics model for homeowners’ EER adoption decision-making. Through simulation experiments, this paper examines the effects of various key factors, including government incentives, retrofit costs, retrofit uncertainty, and network size, on the evolution of EER adoption. The results suggest that government incentives facilitate EER adoption, but their effects require a sufficiently long period of policy implementation and extensive social interaction to be realized. Reducing retrofit costs is a robust and effective way to encourage EER adoption, especially when uncertainty is high. Retrofit uncertainty has a significant impact on the adoption evolution. Increased uncertainty can hinder adoption decisions. In particular, the combination of high uncertainty and incentives is prone to lead to incentive failure. The increase in network size contributes to EER adoption, but attention needs to be paid to the impact of potential incentive redundancy in large-scale networks.

Standing in others’ shoes usually describes the phenomenon in which individuals switch positions and consider others’ benefits. This common saying can also stimulate cooperative behavior, both in natural systems and in human society. Scholars have conducted abundant research on human behavior in evolutionary game theory to discover how to improve cooperation among individuals. Results clearly show that players achieve the highest payoff when they all choose the cooperation strategy. However, selfishness among individuals means that cooperation is not guaranteed every time, and how to improve cooperative behavior remains a challenge in the literature. Here, we formalize the notion of “standing in others’ shoes” mathematically and analyze it within evolutionary game theory. The results indicate that cooperation can be promoted significantly when players take opponents’ payoffs into account. A parameter u is introduced into the simulation process: when the focal player x and its neighbor apply different strategies, the focal player x calculates its own payoff with probability u and considers its neighbor yi’s payoff with probability 1−u. Monte Carlo simulations are conducted on the spatial lattice network, the BA scale-free network, and the small-world network, respectively. The results reveal that the frequency of cooperation improves dramatically when the parameter u reaches a certain threshold.
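
The u-weighted rule described in this abstract amounts to a blended utility, u times one's own payoff plus (1−u) times the neighbour's. The sketch below shows how such a blended payoff could enter a standard Fermi imitation update; the prisoner's-dilemma payoff values and the noise parameter K are hypothetical, and the Fermi rule is an assumed update dynamic, not necessarily the one used in the paper.

```python
import math

# Prisoner's dilemma payoffs (hypothetical values with T > R > P > S).
R, S, T, P = 1.0, 0.0, 1.3, 0.1

def pd_payoff(a, b):
    """Payoff of a player using strategy a ('C' or 'D') against strategy b."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

def perceived_payoff(u, own, neighbour):
    """'Standing in others' shoes': expected utility when the focal player
    counts its own payoff with probability u and the neighbour's with 1-u."""
    return u * own + (1 - u) * neighbour

def fermi_adopt_prob(pi_focal, pi_neighbour, K=0.1):
    """Probability that the focal player imitates the neighbour's strategy."""
    return 1.0 / (1.0 + math.exp(-(pi_neighbour - pi_focal) / K))

# A focal defector x meets a cooperating neighbour y_i.
own, nbr = pd_payoff('D', 'C'), pd_payoff('C', 'D')  # T = 1.3 vs S = 0.0
selfish  = perceived_payoff(1.0, own, nbr)  # u = 1: pure self-interest
empathic = perceived_payoff(0.4, own, nbr)  # u < 1: neighbour's loss counts
```

Lowering u shrinks the perceived advantage of defecting against a cooperator, which is the mechanism by which the blended payoff can tilt the imitation dynamics toward cooperation.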

The limit of validity of ordinary statistical mechanics, and the pertinence of Tsallis statistics beyond it, is explained by considering the most probable evolution of complex-system processes. For this purpose we employ a dissipative Landau–Ginzburg kinetic equation that becomes a generic one-dimensional nonlinear iteration map for discrete time. We focus on the Renormalization Group (RG) fixed-point maps for the three routes to chaos. We show that all fixed-point maps and their trajectories have analytic closed-form expressions, not only (as known) for the intermittency route to chaos but also for the period-doubling and quasiperiodic routes. These expressions take the form of q-exponentials, while the kinetic equation’s Lyapunov function becomes the Tsallis entropy. That is, all processes described by the evolution of the fixed-point trajectories are accompanied by the monotonic progress of the Tsallis entropy. In all cases the action of the fixed-point map attractor imposes a severe impediment on access to the system’s built-in configurations, leaving only a subset of vanishing measure available. Only those attractors that remain chaotic have ineffective configuration-set reduction and display ordinary statistical mechanics. Finally, we provide a brief description of complex-system research subjects that illustrates the applicability of our approach.
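
The two Tsallis objects this abstract relies on have compact standard definitions: the q-exponential e_q(x) = [1 + (1−q)x]^{1/(1−q)} (for positive base, zero otherwise) and the entropy S_q = (1 − Σ p_i^q)/(q − 1), both reducing to their ordinary counterparts as q → 1. A small sketch of these definitions, unrelated to the paper's specific fixed-point maps:

```python
import math
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]_+^(1/(1-q)); e_1 = exp."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i^q) / (q - 1); recovers Shannon entropy as q -> 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

uniform = [0.25] * 4
s_shannon = tsallis_entropy(uniform, 1.0)  # Shannon limit: log(4)
s_q2 = tsallis_entropy(uniform, 2.0)       # (1 - 4 * 0.25^2) / 1 = 0.75
```

For a uniform distribution over n states, S_q = (1 − n^(1−q))/(q − 1), so the q = 2, n = 4 case above evaluates to 0.75, a quick consistency check on the formula.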

Traditional classification techniques usually classify data samples according to the physical organization of the data features, such as similarity, distance, and distribution, and lack a general and explicit mechanism to represent data classes with semantic data patterns. The incorporation of data pattern formation in classification therefore remains a challenging problem. Moreover, classification techniques can only work well when the data features present a high level of similarity within each class in the feature space. This hypothesis is not always satisfied, since in real-world applications we frequently encounter the following situation: on one hand, the data samples of some classes (usually representing normal cases) present well-defined patterns; on the other hand, the data features of other classes (usually representing abnormal classes) present large variance, i.e., low similarity within each class. Such a situation makes data classification a difficult task. In this paper, we present a novel solution to these problems based on the mesostructure of a complex network built from the original data set. Specifically, we construct a core–periphery network from the training data set in such a way that the normal class is represented by the core sub-network and the abnormal class is characterized by the peripheral sub-network. A testing sample is classified into the core class if it attains a high coreness value; otherwise, it is classified into the periphery class. The proposed method is tested on an artificial data set and then applied to classify X-ray images for COVID-19 diagnosis, achieving high classification precision. In this way, we introduce a novel method to describe the data pattern of data “without pattern” through a network approach, contributing to a general solution of classification.
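
The coreness notion underlying this core–periphery scheme is commonly computed from k-core decomposition, which can be done by iterative minimum-degree peeling. Below is a minimal sketch; the toy graph, the threshold value, and the classification rule wrapping it are illustrative assumptions, not the paper's exact procedure.

```python
def core_numbers(adj):
    """Core number of every vertex via iterative minimum-degree peeling."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=deg.get)  # peel the lowest-degree vertex
        k = max(k, deg[v])               # core number is non-decreasing
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

def classify(node, core, threshold=2):
    """Hypothetical rule after the abstract: high coreness -> 'core' (normal)
    class, low coreness -> 'periphery' (abnormal) class."""
    return 'core' if core[node] >= threshold else 'periphery'

# Triangle a-b-c (dense core) with a pendant vertex d (periphery).
adj = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b'], 'd': ['a']}
core = core_numbers(adj)  # triangle vertices get core number 2, pendant gets 1
```

In the paper's setting, a test sample would first be linked into the network, after which its coreness decides the class it is assigned to.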

The influence maximization problem has attracted increasing attention in previous studies. Recent years have witnessed enormous interest in the modeling, performance evaluation, and seed determination of different networked systems. Further, competitive behavior between multiple influential groups within a network has been modeled to simulate realistic marketing and propagation tasks. Powerful seeds can be detected to achieve considerable diffusion effects. Meanwhile, networks operate in the presence of disturbances, and their connectivity tends to be threatened by attacks and errors. Seeds that are robust against structural perturbations in competitive networks are significant for daily applications. However, little attention has been paid to evaluating the robustness of seeds in competitive spreading scenarios, and an effective determination strategy is still lacking. To tackle the robust competitive influence maximization problem, a diffusion model considering competitive behaviors between spreading groups is developed, together with a spreading-ability estimation technique. Based on this model, a robustness measure RCS is designed to evaluate, in numerical form, the robustness of seeds under node-based attacks. The seed determination task is modeled as a discrete optimization problem, and a memetic algorithm containing several problem-oriented operators, termed MA-RCIM, is devised to solve it. Tested on several synthetic and real-world networks, MA-RCIM achieves satisfactory results for solving diffusion dilemmas and shows superiority over existing approaches.
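
The paper's competitive diffusion model is not reproduced here, but the spreading-ability estimation it builds on is typically a Monte Carlo simulation of a cascade model. The sketch below estimates expected spread under the standard (single-group) independent cascade model; the toy network and the propagation probability p are hypothetical.

```python
import random

def ic_spread(adj, seeds, p, trials=1000, seed=0):
    """Monte Carlo estimate of expected independent-cascade spread."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for v in frontier:
                for u in adj[v]:
                    # Each newly activated node gets one chance per neighbour.
                    if u not in active and rng.random() < p:
                        active.add(u)
                        nxt.append(u)
            frontier = nxt
        total += len(active)
    return total / trials

# Small hypothetical network (undirected adjacency lists).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
no_spread   = ic_spread(adj, seeds=[0], p=0.0)  # only the seed stays active
full_spread = ic_spread(adj, seeds=[0], p=1.0)  # whole component activates
```

A robustness measure in the spirit of RCS would re-run such an estimate after node-based attacks on the network and compare the degraded spread to the original one.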

Stream gauge clustering enables diverse studies, such as the analysis of spatial patterns and the physical reasons for these patterns. The clustering of monitoring gauges through complex networks combined with community detection algorithms is a strong alternative to classical methods. However, this approach involves several particularities that impact the clustering results, and the non-stationarity commonly present in streamflow series is typically not accounted for in studies that use this methodology. This article presents the application of a new framework developed to cluster stream gauge stations and to analyse changes in the clustering results across time. Weighted networks were created through Mutual Information combined with an automated threshold. Complex networks were obtained for the entire series and for sliding windows to investigate whether significant differences occur across time. A more in-depth analysis was carried out for selected time windows from the perspective of Network Science and Complex Network Analysis, and the results were compared to those from a classic clustering approach. The framework provided robust and physically coherent clustering results and a more detailed clustering result than the classic approach. The weighted network construction procedure successfully diminished time-delimited effects induced by non-stationarity. However, the sliding-window network results demonstrated that significant changes occurred across time, and three different community configurations were obtained. These results indicate that the use of a single network can misrepresent local characteristics and lead to wrong conclusions. The communities’ evolution across time showed spatio-temporal coherence with both the physical phenomenology of the study area and previous studies. The observed changes were associated with phase shifts of low-frequency sea surface temperature oscillations of both the Atlantic and the Pacific Oceans, through the phase shifts’ direct and indirect influence on the South Atlantic Convergence Zone. The in-depth analysis identified a spatially coherent transition zone within the clusters obtained and attested to the reliability of the framework results based on network typologies. Thus, the proposed framework can aid in clustering problems and provide a better comprehension of local characteristics. Although the framework was developed for stream gauge clustering, its use can be extended to the clustering of any data with nonlinear and nonstationary characteristics.
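
The network-construction step this abstract describes, pairwise mutual information plus a threshold, can be sketched with a simple histogram MI estimator. The synthetic "station" series, the number of bins, and the threshold value below are illustrative assumptions; the paper's automated threshold selection is not reproduced.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information (in nats) between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # restrict to occupied cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def mi_network(series, threshold):
    """Weighted edges between station pairs whose MI exceeds the threshold."""
    n = len(series)
    return {(i, j): mi
            for i in range(n) for j in range(i + 1, n)
            if (mi := mutual_information(series[i], series[j])) > threshold}

rng = np.random.default_rng(2)
base = rng.normal(size=3000)
flows = [base,                                # station 0
         base + 0.1 * rng.normal(size=3000),  # station 1: shares a signal with 0
         rng.normal(size=3000)]               # station 2: independent noise
edges = mi_network(flows, threshold=0.5)      # only the dependent pair survives
```

Sliding-window variants of this construction, one network per window, are what allow the abstract's comparison of community configurations across time.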
