Chapter

Mathematics of Networks


Abstract

The patterns of interactions, both economic and otherwise, between individuals, groups or corporations form social networks whose structure can have a substantial effect on economic outcomes. The study of social networks and their implications has a long history in the social sciences and more recently in applied mathematics and related fields. This article reviews the main developments in the area with a focus on practical applications of network mathematics.


... It is perhaps no coincidence that the star network, which has high degree centrality (see [15]), demonstrates tail risk. A generalization of degree centrality is eigenvector centrality. ...
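The contrast between degree centrality and its eigenvector generalization is easy to see on a star network; a minimal sketch using networkx (the library choice and the 5-node example are ours, not the paper's):

```python
import networkx as nx

# Star on 5 nodes: node 0 is the hub, nodes 1-4 are leaves.
G = nx.star_graph(4)

# Degree centrality normalizes each degree by (n - 1).
dc = nx.degree_centrality(G)
print(dc[0])  # hub: 4/4 = 1.0
print(dc[1])  # leaf: 1/4 = 0.25

# Eigenvector centrality generalizes this: a node scores highly when
# its neighbors themselves score highly.
ec = nx.eigenvector_centrality_numpy(G)
```

The maximal spread between hub and leaf scores is what makes the star the extreme case of degree centralization.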
... Here the last inequality follows again from Lemma 4. Using a similar upper bound for each of the summands in Eq. (15) we get

\[
\operatorname{trace}\!\left(\bigl((A+\Lambda_\epsilon)^k\bigr)^T (A+\Lambda_\epsilon)^k\right)
\le (k+1)\left(\sum_{j=0}^{k}\binom{k}{j}^{2}(1-\epsilon)^{2(k-j)}(\gamma\epsilon)^{2j}\right)\operatorname{trace}(P)
\]
\[
\le (k+1)\left(\sum_{j=0}^{k}\binom{2k}{2j}(1-\epsilon)^{2(k-j)}(\gamma\epsilon)^{2j}\right)\operatorname{trace}(P)
\le (k+1)\,(1-\epsilon+\gamma\epsilon)^{2k}\operatorname{trace}(P)
\]
\[
\operatorname{trace}\bigl(P(A_\epsilon)\bigr) \le \sum_{k=0}^{\infty} k\,(1-\epsilon+\gamma\epsilon)^{2k}\operatorname{trace}(P),
\qquad
\operatorname{trace}\bigl(P(A_\epsilon)\bigr) = O\bigl(\operatorname{trace}(P(A))\bigr).
\]

The proof relies on $\binom{k}{j}^{2} \le \binom{2k}{2j}$. This is a simple combinatorial identity which has been proved in Proposition 8. ...
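The combinatorial inequality the proof relies on, $\binom{k}{j}^2 \le \binom{2k}{2j}$, can be checked numerically; a quick sketch (the range of $k$ is an arbitrary choice):

```python
from math import comb

# Verify binom(k, j)^2 <= binom(2k, 2j) over a range of k and j.
holds = all(
    comb(k, j) ** 2 <= comb(2 * k, 2 * j)
    for k in range(20)
    for j in range(k + 1)
)
print(holds)  # True
```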
Preprint
This paper examines the dependence of network performance measures on network size and considers scaling results for large networks. We connect two performance measures that are well studied but appear to be unrelated. The first is an energy metric, the $\mathcal{H}_2$-norm of a network, which arises in control theory applications. The second is the notion of "tail risk", which arises in economic and financial networks. We study the question of why such performance measures may deteriorate at a faster rate than the growth rate of the network. We first focus on the energy metric and its well-known connection to the controllability Gramian of the underlying dynamical system. We show that undirected networks exhibit the most graceful energy growth rates as network size grows. This rate is quantified completely by the proximity of the spectral radius to unity, or distance to instability. In contrast, we show that no such simple characterization of energy in terms of the network spectrum exists for directed networks. We demonstrate that, for any fixed distance to instability, the energy of a directed network can grow at an exponentially faster rate. We provide general methods for manipulating networks to reduce energy. In particular, we prove that certain operations that increase the symmetry in a network cannot increase energy (in an order sense). Secondly, we focus on tail risk in economic and financial networks. In contrast to the $\mathcal{H}_2$-norm, which arises from computing the expectation of energy in the network, tail risk focuses on the tail probability behavior of network variables. Although the two measures differ substantially, we show that they are precisely connected through the system Gramian. This surprising result explains why topology considerations, rather than specific performance measures, dictate the large-scale behavior of networks.
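The link between the $\mathcal{H}_2$ energy metric and the controllability Gramian can be sketched numerically; the 3-node directed chain below and the identity input/output matrices are our assumptions for illustration, not the paper's model:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable 3-node directed chain (spectral radius 0.5 < 1).
A = np.array([[0.5, 0.0, 0.0],
              [0.4, 0.5, 0.0],
              [0.0, 0.4, 0.5]])
B = np.eye(3)   # noise enters every node
C = np.eye(3)   # every node is observed

# Controllability Gramian P solves the discrete Lyapunov equation
#   P = A P A^T + B B^T
P = solve_discrete_lyapunov(A, B @ B.T)

# Squared H2 norm of the network: trace(C P C^T).
h2_sq = np.trace(C @ P @ C.T)
```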
... Based on the subordination of items represented by the nodes, the nodes were organized into distinct item groups. The edges connecting the nodes represent the partial correlation coefficients between the items (34). ...
... A positive weight suggests a positive correlation, while a negative weight indicates an inverse relationship. The strength of the correlation is reflected in the absolute value of the weight (34). For example, when symptoms of C2-F5 appear, C4-F7 symptoms emerge concurrently. ...
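Edge weights of this kind can be obtained from the precision (inverse covariance) matrix; a minimal sketch with a made-up 3-item covariance, not the study's data:

```python
import numpy as np

# Toy covariance matrix for three items (values invented).
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.2],
                [0.3, 0.2, 1.0]])

# Partial correlation between items i and j, controlling for the rest,
# from the precision matrix K = cov^{-1}:
#   pcor_ij = -K_ij / sqrt(K_ii * K_jj)
K = np.linalg.inv(cov)
d = np.sqrt(np.diag(K))
pcor = -K / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)
```

The sign of each off-diagonal entry gives the direction of the association and its absolute value the edge weight.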
Article
Full-text available
Background Disease-related fear among patients with epilepsy significantly impacts their quality of life. The Disease-Related Fear Scale (D-RFS), comprising three dimensions, serves as a relatively well-established tool for assessing fear in these patients. However, certain problems potentially exist within the D-RFS's attribution of items, and its internal structure is still unclear. Establishing an appropriate dimensional structure and gaining a deeper comprehension of its internal structure, particularly its core variables, is vital for developing more effective interventions aimed at alleviating disease-related fear among patients with epilepsy. Methods This study employed a cross-sectional survey involving 609 patients with epilepsy. All participants underwent assessment using the Chinese version of the D-RFS. We used exploratory network analysis to discover a new structure and network analysis to investigate the interrelationships among fear symptom domains. In addition to the regularized partial correlation network, we also estimated node and bridge centrality indices to identify the importance of each item within the network. Finally, we analyzed the differences in network analysis outcomes among epilepsy patients with different seizure frequencies. Results The findings indicate that nodes within the network of disease-related fear symptoms are interconnected, with no isolated nodes. Nodes within groups 3 and 4 present the strongest centrality. Additionally, a tight interconnection exists among fear symptoms within each group. Moreover, the frequency of epileptic episodes does not significantly impact the network structure. Conclusion In this study, a new five-dimension structure was constructed for the D-RFS, and the fear of disease in patients with epilepsy was conceptualized through a network perspective. The goal is to identify potential targets for relevant interventions and gain insights for future research.
Article
Full-text available
Background Non-suicidal self-injury (NSSI) and depression often co-occur among adolescents with more severe clinical symptoms. This study examined the network structures of NSSI and depressive symptoms in adolescents. Methods Participants were recruited in the psychiatric outpatient clinics of three tertiary hospitals between April 10 and July 10, 2023. All participants had already presented with self-injury behaviors at the outpatient clinics when enrolled. NSSI diagnostic criteria and the Patient Health Questionnaire-9 (PHQ-9) were used to collect NSSI and depressive symptoms separately. We performed a network analysis to visualize the correlations between symptoms and to identify core and bridging symptoms in the comorbidity. Results A total of 248 patients were enrolled in the study, with a mean age of 15.48 (SD = 1.62). Based on PHQ-9 scores and grades, our results showed that the incidence of depression in adolescents with non-suicidal self-injury behavior was relatively high (N = 235, 94.76%), with the majority having severe depression. The network analysis revealed that nodes D-6 “feeling bad, failing or letting yourself or your family down”, D-1 “little interest or pleasure” and D-4 “feeling tired” were the most vital and central symptoms. The most crucial bridging symptom was node NSSI-8 “frequent thinking about self-injury”, which connects NSSI to the depression comorbidity network. Conclusion This study offers a significant symptom-level conceptualization of the association between NSSI and depressive symptoms in a clinical sample of adolescents, which not only enhances our understanding of the comorbidity but also identifies potential treatment targets to prevent and treat comorbidity between adolescent NSSI and depression.
... Equation (7) shows that x is an eigenvector of the adjacency matrix corresponding to the eigenvalue λ. Since the centralities are required to be non-negative, λ must be the largest eigenvalue of the adjacency matrix, with x the corresponding eigenvector (according to the Perron-Frobenius theorem) 25. ...
... Eccentricity is a measure that represents the shortest distance, or geodesic path, from a specific node to the farthest node in the network 20 . The largest eccentricity of a network is referred to as the diameter of the network 25 . Modularity represents the number of edges, up to a multiplicative constant, that fall within groups minus the expected number of edges in an equivalent network with the edges being placed at random 26 . ...
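Eccentricity and diameter, as defined in the snippet above, can be computed directly with networkx; a path graph is chosen purely for illustration:

```python
import networkx as nx

# A 5-node path graph: 0 - 1 - 2 - 3 - 4.
G = nx.path_graph(5)

# Eccentricity of a node: geodesic distance to the farthest node.
ecc = nx.eccentricity(G)
print(ecc[2])  # middle node: 2
print(ecc[0])  # endpoint: 4

# Diameter: the largest eccentricity in the network.
print(nx.diameter(G))  # 4
```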
Article
Full-text available
Healthcare resources are published annually in repositories such as the AHA Annual Survey Database™. However, these data repositories are created via manual surveying techniques, which are cumbersome to collect and not updated as frequently as the website information of the hospital systems represented. This resource is also not widely available to patients in an easy-to-use format. Network analysis techniques have the potential to create topological maps which serve to aid pathfinding for patients in their search for healthcare services. This study explores the topological structure of forty United States academic health center websites. Network analysis is utilized to analyze and visualize 48,686 webpages. Several elements of network structure are examined, including basic network properties and centrality measure distributions. The Louvain community detection algorithm is used to examine the extent to which these techniques allow identification of healthcare resources within networks. The results indicate that websites with related healthcare services tend to form observable clusters useful in mapping key resources within a hospital system.
... 1) Centrality Measures: These determine the importance of nodes in a network. The most widely used centralities are degree centrality, eigenvector centrality [34], closeness centrality [35], and betweenness centrality [36]. Each centrality has different applications; e.g., in a telecommunications network [19], a node with higher betweenness centrality has more control over the network, since more information passes through that node. ...
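The claim that a high-betweenness node controls information flow can be illustrated on a toy bridge topology (our construction, assuming networkx): every shortest path between the two clusters must cross the bridge node.

```python
import networkx as nx

# Two triangles joined through a single bridge node "b".
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0),     # left triangle
                  (3, 4), (4, 5), (5, 3),     # right triangle
                  (2, "b"), ("b", 3)])        # bridge via node "b"

bc = nx.betweenness_centrality(G, normalized=False)

# "b" lies on all 9 cross-cluster shortest paths, more than any
# other node, so it has the highest betweenness.
top = max(bc, key=bc.get)
```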
Article
Full-text available
In real-world networks, such as social networks, the occurrence of random failures or external attacks on network components can lead to the discontinuation of information flow in the network. Hence, ensuring the existence and robustness of these networks is a crucial concern these days. To address this problem, one possible solution is to redesign the connection between individual nodes in the network using some budget-constrained rewiring operations. Multiple rewiring operations help in building trust between individuals and strengthening their connections, further enhancing overall connectivity and information flow in the network. The effectiveness of rewiring approaches is determined by analyzing some important facts, such as the rewiring cost, adherence to budget constraints, and enhancement in network connectivity. To assess the efficacy of rewiring approaches, the rewiring cost has been regularly updated after each rewiring operation by measuring the evolution of the size of the giant component in the social network. The performance of the proposed budget-constrained rewiring approaches is evaluated against the random rewiring approach. The results show the superiority of the proposed approaches on the benchmark real-world dataset. By leveraging these budget-constrained rewiring approaches, it becomes possible to design social systems that exhibit enhanced stability and reliability of social networks in the face of various disruptions.
... A node's importance increases if it is connected to other significant nodes. The eigenvector centrality of node i is equal to the i-th element of the eigenvector that corresponds to the largest eigenvalue of the adjacency matrix [50]. The node strength metric determines how tightly a node is directly connected to other nodes in the network by summing all the absolute edge weights connected to it. ...
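Node strength as described, the sum of absolute incident edge weights, in a short sketch (the toy weights are ours; negative weights model inverse associations):

```python
import networkx as nx

# Weighted toy network.
G = nx.Graph()
G.add_edge("a", "b", weight=0.5)
G.add_edge("a", "c", weight=-0.3)
G.add_edge("b", "c", weight=0.2)

# Node strength: sum of absolute weights of a node's incident edges.
strength = {
    n: sum(abs(d["weight"]) for _, _, d in G.edges(n, data=True))
    for n in G
}
print(strength["a"])  # 0.5 + 0.3 = 0.8
```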
Article
Full-text available
Previous studies have examined the neurobiological underpinnings of personality traits in various paradigms, such as psychobiological theory and Eysenck's model, as well as the five-factor model. However, there are limited results on the co-clustering of functional connectivity, as measured by functional MRI, with personality profiles. In the present study, we analyzed resting-state connectivity networks and character type with the Lowen bioenergetic test in 66 healthy subjects. We identified direct correspondences between network metrics such as eigenvector centrality (EC), clustering coefficient (CC), and node strength (NS) and specific personality characteristics. Specifically, NAcc L and OFCmed were associated with oral and masochistic traits in terms of EC and CC, while Insula R was associated with oral traits in terms of NS and EC. It is noteworthy that we observed significant correlations between individual items and node measures in specific regions, suggesting a more targeted relationship. However, the more relevant finding is the correlation between metrics (NS, CC, and EC) and overall traits. A hierarchical clustering algorithm (agglomerative clustering, an unsupervised machine learning technique) and principal component analysis were applied, and we identified three prominent principal components that cumulatively explain 76% of the psychometric data. Furthermore, we clustered the network metrics (by unsupervised clustering) to explore whether neural connectivity patterns could be grouped based on combined average network metrics and psychometric data (global and local efficiencies, node strength, and eigenvector centrality). We identified three principal components, where the cumulative amount of explained data reaches 99%. The correspondence between network measures (CC and NS) and predictors (responses to Lowen's items) is 62%, predicted with a precision of 90%.
... As such, Pajek 5.18 software was used to analyze the network's centrality. Newman (2008) described three types of centrality: degree centrality, closeness centrality, and betweenness centrality, each of which measures the importance of a node. Degree centrality assumes that a node's influence increases with the number of other nodes it can directly affect. ...
Article
Full-text available
The aim of the present study was to quantitatively analyze the importance of each risk factor in a food safety event, so as to fully elucidate the correlation between different risk factors and provide a reference for food safety governance. Text mining and complex network analysis methods were utilized to explore the causal mechanism of food safety incidents. By performing text mining on food safety event news reports, 15 major risk factors were identified based on high-frequency words. A causal network for food safety accidents was then constructed using strong association rules among these factors. Through network centrality analysis, the five core factors of food safety incidents and their associated sets were clarified. Based on text mining of 6,282 cases of food safety incidents reported by online media, 168 keywords related to food risk factors were extracted and further categorized into 15 types of food safety risk factors. Network analysis results revealed that microbial infection emerged as the most critical risk factor, with its associated sets including biotoxins and parasites, counterfeiting or fraud, processing process issues, and non-compliance with quality indicators.
... Status is commonly measured by eigenvector centrality (Ballinger et al., 2016; Bonacich & Lloyd, 2001; Conti & Graham, 2020; Katz, 1953; Knoke & Burt, 1983; Wasserman & Faust, 1994). Another core concept in network theories, eigenvector centrality quantifies the importance of a node (a participant) within a network by accounting for both the number and quality of its direct connections (Bonacich, 1972; Newman, 2008; Page et al., 1998). That is, being connected to well-connected others (also of high eigenvector centrality) is more valuable than being connected to poorly connected others. ...
Article
Full-text available
Firms increasingly leverage idea markets, where participants (such as employees) generate, improve, and evaluate ideas on a collaborative digital platform. Different participants contribute differently to the ideation process, some generating high-quality ideas while others initiate discussion threads and comment on the ideas to further enhance their quality. Such diverse contributions may be importantly influenced by the participants' diverse social capital, resource access and status, in their pre-existing network. We theorize this relationship and test our hypotheses by conducting two idea market studies, one involving only a firm's employees (Study 1: closed innovation) and the other also incorporating non-employees (Study 2: open innovation). We show that higher-quality ideas are generated by participants with greater resource access, whereas continued engagement, including contributing larger quantities of ideas, discussion threads, and comments, stems from those with higher status. These findings have important implications for ideator recruitment and idea market design.
... All nodes in the network are given relative scores based on the idea that a node's score is influenced more by connections to high-scoring nodes than by the same connections to low-scoring nodes. When a node has a high eigenvector score, it indicates that it is linked to numerous other nodes that also have high scores [30,31]. Eigenvector centrality was first defined generally in networks based on social connections by [32] and was subsequently popularized by [33] as an essential measure in link analysis. ...
Preprint
Full-text available
Graphs are increasingly used in research, industry, and government. This has led to a wide range of analytical and graph-processing tools, platforms, and storage systems being presented over time. Each system evaluates its effectiveness and usability in processing graph data using different statistics and criteria. As a result, comparing the performance of the various systems becomes challenging. This study benchmarks popular network analysis tools (NetworkX, RustworkX, Igraph, EasyGraph, and Graph-tool) by evaluating their performance and community engagement metrics, such as the number of downloads, stars, and forks. We benchmark each library's performance on three open-source datasets, one custom dataset, and twelve network analysis methods. The findings reveal that while NetworkX is highly popular, it exhibits slower performance in most benchmarks compared to Graph-tool and Igraph, which are faster and more efficient despite their lower popularity. The continued popularity of NetworkX may be attributed to factors like well-documented methods and a user-friendly API, though this warrants further investigation. This research provides valuable insights for practitioners, researchers, and developers, helping them make informed decisions when selecting network analysis tools. The study emphasizes the trade-off between user-friendliness and performance, suggesting that the optimal tool choice depends on project-specific requirements.
... Eigenvector centrality is used to quantify the influence of a node within networks [26]. The relative centrality score of a node is defined as ...
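The defining equation did not survive extraction; in standard notation (ours, not necessarily the cited paper's), the relative eigenvector centrality score $x_v$ of a node $v$ satisfies

\[
x_v = \frac{1}{\lambda} \sum_{u} A_{vu}\, x_u ,
\]

i.e. $A\mathbf{x} = \lambda\mathbf{x}$, with $\lambda$ taken as the largest eigenvalue of the adjacency matrix $A$ so that all entries of $\mathbf{x}$ are non-negative.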
Article
Full-text available
Over the past three decades, there has been a noticeable growth in both the quantity and quality of scientific research in India. In recent years, India's growing prominence on the global map of research productivity has become highly visible. Numerous scientometric studies have been reported for various fields in India, such as computer science, nanoscience, nanotechnology, artificial intelligence, solar cells, and dentistry, among others. However, there is a lack of scientometric research in the domain of mathematics within India, despite its crucial role in propelling advancements across various disciplines. Furthermore, research collaboration has emerged as an important factor in accelerating the progress of mathematics research in a country since the 20th century. Therefore, studying collaboration trends becomes an essential component of scientometrics. In this paper, we comprehensively analyze the state of mathematics research in India, including collaboration trends, using methods from scientometrics and complex network analysis. Scientometrics offers an overview of the nature of mathematics research being undertaken, while complex network analysis reveals the dynamics and structural variation of research collaborations at the country and institutional level across various temporal periods. The findings provide insights into the development and collaboration trends of mathematics research in India from 2001 to 2021. There has been an exponential increase in publications since 2015, with approximately 20% of mathematics research conducted in India appearing to be associated with physics research. In terms of research collaborations, there has been a notable increase in collaborations between India and several countries, including the USA, China, Saudi Arabia, and Turkey. However, an analysis of institutional collaboration networks suggests that these collaborations tend to be small in scale.
... Various eigenvalues λ admit a non-zero eigenvector solution; however, only the unique largest eigenvalue yields the desired centrality measure [76]. EC measurements for the CTI graph are provided in Table 10. ...
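Power iteration makes the "largest eigenvalue" statement concrete: repeated multiplication by the adjacency matrix converges to the positive Perron eigenvector. A sketch on an arbitrary 4-node non-bipartite graph (numpy assumed):

```python
import numpy as np

# Adjacency matrix of a triangle 0-1-2 with a pendant node 3 on node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Power iteration: x converges to the eigenvector of the largest
# eigenvalue, whose entries are all positive (Perron-Frobenius).
x = np.ones(4)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

lam = x @ A @ x  # Rayleigh quotient approximates the top eigenvalue
```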
Preprint
Full-text available
Advanced persistent threat (APT) attacks are sophisticated and organized attacks commonly motivated by political, financial, and strategic objectives. In order to comprehend their tactics, techniques, and procedures (TTP) and indicators, APT reports are valuable sources. While blue teams typically rely on server logs, firewall rules and user authorizations managed in database tables, attackers have a graph-based mindset. In this work, we propose a framework for discovering and evaluating APTs using graph-based algorithms. Cyber threat intelligence (CTI) was extracted from 40,358 pages of APT reports and transformed into a graph. Centrality, community, and similarity analyses were executed on the graph. As a result, critical and influential APT groups and indicators of compromise (IoC) were discovered. Similar attacks and APT groups were revealed. Analysis results were interpreted to create new strategic CTI that can be utilized in future security operations.
... Eigenvector centrality is a measure for calculating the influence of a node within the network based on its connections to other nodes (Wei et al., 2011). In contrast to other centrality measures, it does not treat all edges as equal; instead, it depends on both the number and the quality of a node's connections (Newman, 2008). Prominence is a measure proposed by Hoffman et al. (2014) for indicating a concept's influence and is obtained by calculating the mean of occurrence probability and eigenvector centrality. ...
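Prominence as defined by Hoffman et al. (2014) is a simple average; a sketch with hypothetical concept values (both dictionaries are invented for illustration):

```python
# Hypothetical occurrence probabilities and eigenvector centralities
# for two concepts from a mental model network.
occurrence_prob = {"crop rotation": 0.9, "tillage": 0.6}
eig_centrality = {"crop rotation": 0.8, "tillage": 0.4}

# Prominence: mean of occurrence probability and eigenvector centrality.
prominence = {
    c: (occurrence_prob[c] + eig_centrality[c]) / 2
    for c in occurrence_prob
}
```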
Article
Full-text available
Mediterranean agriculture is increasingly threatened by soil degradation and climate change. Conservation agriculture (CA) is a farming approach characterized by reduced soil disturbance, soil cover, and crop rotation that provides agronomic, economic, and environmental benefits to farmers, but which is not yet widespread in the Mediterranean region. To investigate the sociocultural aspects of CA adoption, we examined farmers' understandings of 'good soil management' and a 'good farmer' identity. We employed network analysis to visualize and compare farmers' mental models of these concepts and how they differed according to farmers' tillage practices. We found that crop rotation is a prominent concept cognitively tied to fertilizer application, bridging conventional and reduced tillage practices. CA farmers' mental models of soil management are more complex than conventional farmers'. Demonstrating productivity and having experience and knowledge were the most prominent aspects of farmers' understanding of a 'good farmer'. For CA farmers, this was tied to environmental responsibility and innovation, whereas for conventional farmers, a set of best practices, including tillage and the use of mineral fertilizers, was valued more highly. CA may compete with held understandings concerning soil management among conventional farmers. CA adoption programmes could be better tailored to align with their cultural values.
... Network analysis is increasingly developing, as evidenced by the emergence of software that makes it easier to analyze a social network. A social network can be represented and visualized in two ways, namely using matrices or using graphs [15]-[19]. ...
Article
Full-text available
Determining location is a problem encountered in many areas of everyday life. It becomes complex when many factors are taken into consideration, and these factors can conflict with one another. The issue discussed in this research is determining the location of the rice processing industry. This problem is worth studying given that the opportunity to establish a rice industry in several regions of Indonesia is very high: the potential for the main raw material (rice) is very large, as is the rice market, since rice is the staple food of most of the Indonesian population. The case study discussed in this research is the rice processing industry in Batu Bara Regency, one of the districts in North Sumatra Province with enormous rice potential. In order to improve the welfare of farmers and provide good-quality rice for the community, a rice processing industry is needed. The problem is complex: besides the seasonal availability of rice raw materials, there is the question of where the rice processing factory should be located. Various criteria relating to the distance from the factory to each source of the main raw material (rice) need to be considered, which makes the determination a multi-criteria problem. This research provides a location determination model using a Multi-Criteria Network. The criteria used to determine the location of the rice processing industry in Batu Bara Regency are the transportation distance from the raw material center, travel time, and the condition of the road connecting the raw material center to the factory location. Based on the distance criterion, a route with a total minimum distance score of 0.71 is obtained. Based on travel time, the best route has a minimum total time score of 1.61. Based on road quality, the best route has a maximum total score of 0.25. Considering all three criteria, the best route has a total optimum score of 2.96.
... where A represents the adjacency matrix of graph G, and λ is the eigenvalue obtained using the Perron-Frobenius theorem. The solution X to this system is unique, and its entries are positive if λ is the largest eigenvalue of A (Newman, 2008). ...
Article
Full-text available
Contemporary spatial statistics studies often underestimate the complexity of road networks, thereby inhibiting the strategic development of effective interventions for car accidents. In response to this limitation, the primary objective of this study is to enhance the spatiotemporal analysis of urban crash data. We introduce an innovative spatial-temporal weight matrix (STWM) for this purpose. The STWM integrates external covariates, including road network topological measurements and economic variables, offering a more comprehensive view of the spatiotemporal dependence of road accidents. To evaluate the functionality of the presented STWM, random effect eigenvector spatial filtering analysis is employed on Boston's traffic accident data from January to March 2016. The STWM improves analysis, surpassing distance-based SWM with a lower residual standard error of 0.209 and a higher adjusted R2 of 0.417. Furthermore, the study emphasizes the influence of road length on crash incidents, spatially and temporally, with random standard errors of 0.002 for spatial effects and 0.026 for non-spatial effects. This is particularly evident in the north and center of the study area during specific periods. This information can help decision-makers develop more effective urban development models and reduce future crash risks.
... The centrality scores are obtained by solving this matrix equation. It can be shown that, while there can be many values for λ, only the largest value will result in positive scores for all nodes [26]. ...
Article
Full-text available
The topology of transportation networks, such as road and rail networks, determines the efficiency and effectiveness of the corresponding transport systems. Quantifying the relative importance of nodes in such networks is vital to understanding their dynamics. Centrality metrics used in network science often assume that only the shortest paths contribute to the importance of the nodes. In traffic scenarios, however, while most traffic preferentially goes through paths of least cost, costlier paths are not avoided entirely. In this work, we introduce a new centrality metric, transportation centrality, which considers all paths that go through a node and uses Logit functions and path lengths to compute the traffic that goes through each path, which in turn is used in the centrality calculation. Therefore, this metric can be calculated from topology alone, while it can also utilise traffic data if available. We demonstrate the utility of this new centrality metric by considering the suburban transportation networks of Seoul and Delhi. We also analyse the influence of the sensitivity parameter of the Logit function on the calculation of transportation centrality. We demonstrate that the introduced centrality metric is useful in understanding the relative importance of nodes in transportation networks, including networks for which no traffic data is available.
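The Logit path-choice idea can be sketched as a softmax over negative path costs, so costlier paths still carry some flow; this is our simplified reading with a hypothetical sensitivity parameter `theta`, not the paper's exact formulation:

```python
import math

def logit_shares(costs, theta=1.0):
    """Split traffic across candidate paths by a Logit model:
    share_i proportional to exp(-theta * cost_i)."""
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical paths between an origin and a destination.
shares = logit_shares([2.0, 3.0, 5.0], theta=1.0)
```

Larger `theta` concentrates traffic on the cheapest path; `theta` near zero spreads it evenly.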
... The eigenvector centrality parameter summarizes in a single value both the number of a node's interactions and their quality. High values indicate that a particular node is connected to several nodes which themselves have high scores (Newman, 2018). Lastly, the node degree represents the total number of interactions (edges) involving a particular node. ...
... The eigenvector centrality gives connections to already well-connected nodes a higher value (Newman, 2008). Betweenness centrality: the binary betweenness of node $i$, $b_i$, is defined as the proportion of the number of shortest paths between nodes $s$ and $t$ that pass through node $i$, $\sigma_{st}(i)$, to the total number of shortest paths existing between $s$ and $t$, $\sigma_{st}$ (Freeman, 1978). Weighted betweenness centrality ...
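The ratio definition above can be computed by hand and cross-checked against networkx; a sketch on a 4-cycle, where opposite corners are joined by two geodesics so each intermediate node carries a share of 1/2:

```python
import networkx as nx
from itertools import combinations

G = nx.cycle_graph(4)  # nodes 0-1-2-3-0

def binary_betweenness(G, i):
    """Sum over node pairs of sigma_st(i) / sigma_st, excluding
    pairs where i is an endpoint."""
    total = 0.0
    for s, t in combinations(G.nodes, 2):
        if i in (s, t):
            continue
        paths = list(nx.all_shortest_paths(G, s, t))
        total += sum(i in p for p in paths) / len(paths)
    return total

print(binary_betweenness(G, 0))  # 0.5: node 0 lies on one of the two 1-3 geodesics
```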
Preprint
Full-text available
Air cargo transport is a crucial component of the transportation industry, yet it has not been studied as comprehensively as passenger transport. This study analyses the structure of Europe's air cargo network with a particular focus on the geography component of PEG analysis (Politics, Economics, and Geography). The study compares pre-Covid 2019 and pandemic-affected 2020 air cargo data. Firstly, it compares the cargo transport performance of European airports and cargo transport routes in these two years. Then, it identifies the structure of the European air cargo network using complex network theory. The study examines 253 nodes (airports) and 1033 edges (routes) in 41 European countries in 2019, and 228 nodes and 936 edges in 2020. It reveals influential airports for air cargo transportation in Europe, as well as airports that gained or lost importance during the pandemic. A significant finding is that hub-to-hub connections are as important as hub-and-spoke connections in Europe's air cargo transport. The relevance of geographical proximity in assessing transfer-hub performance is demonstrated by a weighted betweenness measure based on the traffic-to-distance ratio. Additionally, the study concludes that modularity maximization does not seem to reveal meaningful communities in Europe's air cargo network. Overall, the study provides valuable insights into the structure and dynamics of Europe's air cargo network and highlights areas that warrant further investigation.
... Eigenvector centrality [45] measures a node's importance (here encoded as residue conservation) while also integrating the importance of the surrounding nodes. A node connected to more highly conserved nodes therefore receives a higher centrality score [46]. Eigenvector centrality scores are calculated for each residue in the TAC graph. ...
Article
Full-text available
The identification of protein surfaces required for interaction with other biomolecules broadens our understanding of protein function, their regulation by post-translational modification, and the deleterious effect of disease mutations. Protein interaction interfaces are often identifiable as patches of conserved residues on a protein’s surface. However, finding conserved accessible surfaces on folded regions requires an understanding of the protein structure to discriminate between functional and structural constraints on residue conservation. With the emergence of deep learning methods for protein structure prediction, high-quality structural models are now available for any protein. In this study, we introduce tools to identify conserved surfaces on AlphaFold2 structural models. We define autonomous structural modules from the structural models and convert these modules to a graph encoding residue topology, accessibility, and conservation. Conserved surfaces are then extracted using a novel eigenvector centrality-based approach. We apply the tool to the human proteome identifying hundreds of uncharacterised yet highly conserved surfaces, many of which contain clinically significant mutations. The xProtCAS tool is available as open-source Python software and an interactive web server.
Preprint
Full-text available
With the recent advancements in social network platform technology, an overwhelming amount of information spreads rapidly, and it can be increasingly difficult to discern which information is false and which is true. If false information proliferates widely, it can lead to undesirable outcomes. Hence, on receiving a piece of information, we can pose two questions: (i) Is the information true? (ii) If not, who initially spread it? The first is the rumor detection problem; the second is the rumor source detection problem. Rumor detection involves identifying and mitigating false or misleading information spread via various communication channels, particularly online platforms and social media. Rumors range from harmless ones to deliberately misleading content aimed at deceiving or manipulating audiences. Detecting misinformation is crucial for maintaining the integrity of information ecosystems and preventing harmful effects such as the spread of false beliefs, polarization, and even societal harm. It is therefore very important to quickly distinguish misinformation while simultaneously finding its source to block it from spreading on the network. However, most existing surveys have analyzed these two issues separately. In this work, we survey existing research on the rumor detection and rumor source detection problems together, including joint detection approaches, so that the relationship between the two problems, and how they are similar and different, can be observed. The limitations of rumor detection, rumor source detection, and their combined treatment are also explained, and some challenges to be addressed in future work are presented.
Article
Determining the form of chronic disorder of consciousness is a challenging task in clinical practice, highlighting the need for objective diagnostic methods to assess the depth of impairment of consciousness. This study contributes to the development of more effective diagnostic and prognostic algorithms and biomarkers for patient management. Resting-state functional MRI data from 15 patients and 12 healthy controls were acquired using a 1.5 T system and processed using the statistical parametric mapping package. A connectivity matrix was constructed to assess brain connectivity, and statistical analyses including permutation tests, network-based statistics, and geodesic distance were used to identify significant differences in network measures between controls and patients. Our results revealed significant differences between the functional networks of the patient and control groups at both local and global levels, with altered metrics of node strength, clustering, and global efficiency. Notably, subcortical structures such as the thalamus, caudate nucleus, and raphe dorsalis nucleus showed disruptions in patients, consistent with the role of these regions in the basal ganglia-thalamo-cortical circuit. Our findings provide insight into the complex problem of how information is processed in the functional brain network in chronic disorders of consciousness, beyond the mere localization of functions.
Chapter
In this chapter, we shall present various definitions and theories useful for the understanding of the contents of the structures of asymmetric interactions.
Article
In this study, the impact of sports and various variables on the expected trust and ethical intelligence levels of university students was investigated. The research was conducted using a general survey model. Data collection instruments included a "Personal Information Form" developed by researchers, along with the "Expected Trust Scale" and "Ethical Intelligence Scale." Data were analyzed using SPSS 26.0 software. Independent sample t-tests were employed for binary group comparisons based on the number of categories of the independent variable, while one-way Analysis of Variance (ANOVA) with post-hoc LSD tests was used for multiple groups. The relationship between the two scales was examined using Pearson correlation analysis. The analysis results were interpreted at a significance level of 0.05. In the study, it was observed that university students' expected trust and ethical intelligence attitudes were influenced by variables such as age, faculty of study, participation in sports, weekly duration of sports activity, type of sport, and age at which sports were started. Additionally, significant positive correlations were found between participants' levels of "Expected Trust" and "Ethical Intelligence." In conclusion, it was determined that sports significantly contribute to both expected trust and ethical intelligence attitudes.
Thesis
Full-text available
This master thesis mainly examines the question whether Graph Neural Networks should be preferred over classic Multilayer Neural Networks for generating recommendations in Recommender Systems. We focus on generating recommendations using Collaborative Filtering, one of the two main branches within Recommender Systems. First, we describe methodological foundations. Then we introduce relevant models. We conduct simulations, which lead us to the conclusion that Graph Neural Networks should not always be considered as superior. In addition, our research shows advantages and disadvantages of Graph Neural Networks. Up to a certain size of the given graph, its structure can be exploited for Machine Learning. For the selected models, we observe an over-parameterisation and very long runtimes when processing large graphs.
Research
To quote this resource: Dorian DUTOIT-PROUST, “Relevance and Effectiveness of sui generis instruments to tackle exclusionary practices in the Digital Economy”, Competition Forum – Resources, 2024, n° 0015, https://competition-forum.com
Research Proposal
Full-text available
This study explores the intersection of artificial intelligence (AI) and environmental education through a comprehensive bibliometric analysis focused on sustainability-oriented learning practices. The transformative potential of AI in education includes adaptive learning, intelligent tutoring systems, and real-time analytics that personalize educational experiences and improve environmental awareness. These tools facilitate understanding complex ecological systems, encourage sustainable behavior, and promote global collaboration through augmented reality (AR) and virtual reality (VR) technologies. This study uses advanced ensemble community detection techniques in bibliometric networks to identify critical themes and influential publications in AI-enhanced environmental education. The findings highlight historical trends and critical innovations and provide insights into the evolving role of AI in shaping sustainability-oriented pedagogy. This work highlights the potential of AI to develop learner-centered, data-driven educational methods that improve societal resilience and environmental awareness.
Article
This study investigates the functional brain network in major depressive disorder using network theory and a consensus network approach. At the macroscopic level, we found significant differences in connectivity measures such as node strength and clustering coefficient, with healthy controls exhibiting higher values. This is consistent with disruptions in functional brain network segregation in patients with major depressive disorder. Consensus network analysis revealed that the central executive and salience networks were predominant in healthy controls, whereas depressed patients showed greater overlap with the default mode network. No differences were found in network efficiency measures, indicating comparable brain network integration between healthy controls and major depressive disorder groups. Importantly, the clustering coefficient emerged as an effective diagnostic biomarker for depression, achieving high sensitivity (90%), specificity (92%), and overall precision (90%). Further analysis at the mesoscale level uncovered unique functional connections distinguishing healthy controls and major depressive disorder groups. Our findings underscore the utility of analyzing functional networks from the macroscale to the mesoscale, and provide insight into overcoming the challenges associated with intersubject variability and the multiple comparisons problem in network analysis.
Article
Motivation Intracellular organelle networks (IONs) such as the endoplasmic reticulum (ER) network and the mitochondrial (MITO) network serve crucial physiological functions. The morphology of these networks plays a critical role in mediating their functions. Accurate image segmentation is required for analyzing the morphology and topology of these networks for applications such as molecular mechanism analysis and drug target screening. So far, however, progress has been hindered by their structural complexity and density. Results In this study, we first establish a rigorous performance baseline for accurate segmentation of these organelle networks from fluorescence microscopy images by optimizing a baseline U-Net model. We then develop the multi-resolution encoder (MRE) and the hierarchical fusion loss (ℓhf) based on two inductive components, namely low-level features and topological self-similarity, to assist the model in better adapting to the task of segmenting IONs. Empowered by MRE and ℓhf, both U-Net and Pyramid Vision Transformer (PVT) outperform competing state-of-the-art models such as U-Net++, HR-Net, nnU-Net, and TransUNet on custom datasets of the ER network and the MITO network, as well as on public datasets of another biological network, the retinal blood vessel network. In addition, integrating MRE and ℓhf with models such as HR-Net and TransUNet also enhances their segmentation performance. These experimental results confirm the generalization capability and potential of our approach. Furthermore, accurate segmentation of the ER network enables analysis that provides novel insights into its dynamic morphological and topological properties. Availability and Implementation Code and data are openly accessible at https://github.com/cbmi-group/MRE. Supplementary Information Supplementary information is available at Bioinformatics online.
Article
Full-text available
Distributed intelligence systems (DIS) containing natural and artificial intelligence agents (NIA and AIA) for decision making (DM) belong to promising interdisciplinary studies aimed at digitalization of routine processes in industry, economy, management, and everyday life. In this work, we suggest a novel quantum-inspired approach to investigate the crucial features of DIS consisting of NIAs (users) and AIAs (digital assistants, or avatars). We suppose that N users and their avatars are located in N nodes of a complex avatar–avatar network. The avatars can receive information from and transmit it to each other within this network, while the users obtain information from the outside. The users are associated with their digital assistants and cannot communicate with each other directly. Depending on the meaningfulness/uselessness of the information presented by avatars, users show their attitude by making emotional binary “like”/“dislike” responses. To characterize NIA cognitive abilities in a simple DM process, we propose a mapping procedure for Russell's valence-arousal circumplex model onto an effective quantum-like two-level system. The DIS aims to maximize the average satisfaction of users via the AIAs' long-term adaptation to their users. In this regard, we examine the opinion formation and social impact resulting from the collective emotional state evolution occurring in the DIS. We show that the generalized cooperativity parameters G_i, i = 1, …, N, introduced in this work play a significant role in DIS features, reflecting the users' activity in possible cooperation and responses to their avatars' suggestions. These parameters reveal how frequently AIAs and NIAs communicate with each other, accounting for the cognitive abilities of NIAs and information losses within the network. We demonstrate that conditions for opinion formation and social impact in the DIS are relevant to a second-order non-equilibrium phase transition.
The transition establishes a non-vanishing average information field inherent to information diffusion and long-term avatar adaptation to their users. It occurs above the phase transition threshold, i.e. at G_i > 1, which implies small (residual) social polarization of the NIA community. Below the threshold, at weak AIA–NIA coupling (G_i ≤ 1), many uncertainties in the DIS inhibit opinion formation and social impact for the DM agents due to the suppression of information diffusion; the AIAs' self-organization within the avatar–avatar network is elucidated in this limit. To increase the users' impact, we suggest an adaptive approach by establishing a network-dependent coupling rate with their digital assistants. In this case, the mechanism of AIA control helps resolve the DM process in the presence of some uncertainties resulting from the variety of users' preferences. Our findings open new perspectives in different areas where AIAs become effective teammates for humans in solving common routine problems in network organizations.
Article
How do migrant social networks matter for performance on the job? We examine this by constructing a nationality-based network of foreign newcomers when they first begin to play on the PGA TOUR and examine the impact of this initial social network on newcomers' probability of surviving (i.e., keeping their license) at the end of their inaugural PGA TOUR season. We find that the migrant social network matters among the non-elite group of players in the second tier of the PGA TOUR, but not among the elite group of players in the first tier. For the second-tier players, we find that the density of connections within a nationality cluster has a sizable positive effect on newcomers' probability of surviving, but no evidence that the centrality of a nationality cluster in the overall PGA TOUR network has an impact on survival.
Article
Full-text available
Individuals with objectively-defined subtle cognitive decline had higher progression rates of cognitive decline and pathological deposition than the healthy elderly, indicating a higher risk of progressing to Alzheimer's disease. However, little is known about the brain functional alterations during this stage. Thus, we aimed to investigate the functional network patterns in an objectively-defined subtle cognitive decline cohort. Forty-two cognitively normal, 29 objectively-defined subtle cognitive decline and 55 mild cognitive impairment subjects were included based on neuropsychological measures from the Alzheimer's Disease Neuroimaging Initiative dataset. Thirty cognitively normal, 22 objectively-defined subtle cognitive decline and 48 mild cognitive impairment subjects had longitudinal MRI data. The degree centrality and eigenvector centrality for each participant were calculated using resting-state functional MRI. For cross-sectional data, analysis of covariance was performed to detect between-group differences in degree centrality and eigenvector centrality after controlling for age, sex and education. For longitudinal data, repeated-measures analysis of covariance was used to compare the alterations during the follow-up period among the three groups. To clarify the clinical significance, we correlated degree centrality and eigenvector centrality values with Alzheimer's disease biomarkers and cognitive function. The analysis of covariance showed significant between-group differences in eigenvector centrality and degree centrality in the left superior temporal gyrus and left precuneus, respectively. Across groups, the eigenvector centrality value of the left superior temporal gyrus was positively related to recognition scores in the auditory verbal learning test, whereas the degree centrality value of the left precuneus was positively associated with the mini-mental state examination total score.
For longitudinal data, repeated-measures analysis of covariance indicated that the objectively-defined subtle cognitive decline group had the fastest rate of decline in both eigenvector centrality and degree centrality values among the three groups. Our study showed increased brain functional connectivity in objectively-defined subtle cognitive decline individuals at both the local and global level, which was associated with Alzheimer's disease pathology and neuropsychological assessment. Moreover, we also observed a faster rate of decline in functional network metrics in objectively-defined subtle cognitive decline individuals during the follow-ups.
Article
Network centrality is an intermediary between airport resource utilization and air traffic generation. A central position in the network with frequent and regular flights with hub nodes can boost air traffic by providing better accessibility, resulting in more efficient use of airport resources. However, this relationship has been largely ignored in the literature. Using data from the Turkish airport industry, this paper proposed a weight-restricted Network Data Envelopment Analysis model, which considers network centrality measures as the cornerstone intermediates that establish the link between airport resources and the traffic volume handled. In the first stage, called networkability, assets such as runways, terminals, aprons, and special purpose vehicles, and exogenous factors including population, socio-economic development, and tourist arrivals are used to accomplish the network integration with other airports, as measured by degree centrality, betweenness centrality, and eigenvector centrality. In the second stage, called traffic generation, this network integration allows for aircraft movements and workload unit to be handled. Criteria weights of model variables were calculated using Criteria Importance Through Intercriteria Correlation. The main findings indicate that 1) the weight-restriction procedure improved the robustness of Network DEA, 2) the proposed two-stage structure reveals whether performance losses are due to networkability or traffic generation capabilities and helps to identify the right policies for performance improvement, 3) the Turkish airports generally suffer from the inability to establish connections in the domestic network, 4) the pandemic has significantly improved the domestic networkability of airports due to mandatory direct flights while devastating the traffic generation capability, 5) low betweenness centrality is the main reason for weak networkability, and 6) good networkability may not ensure air traffic generation.
Article
Graph sparsification is a technique that approximates a given graph by a sparse graph with a subset of its vertices and/or edges. The goal of an effective sparsification algorithm is to maintain the specific graph properties relevant to the downstream task while minimizing the graph's size. Graph algorithms often suffer from long execution times due to irregularity and the large size of real-world graphs. Graph sparsification can greatly reduce the run time of graph algorithms by substituting the full graph with a much smaller sparsified graph, without significantly degrading output quality. However, the interaction between the numerous sparsifiers and graph properties is not widely explored, and the potential of graph sparsification is not fully understood. In this work, we cover 16 widely-used graph metrics, 12 representative graph sparsification algorithms, and 14 real-world input graphs spanning various categories, exhibiting diverse characteristics, sizes, and densities. We developed a framework to extensively assess the performance of these sparsification algorithms against graph metrics, and provide insights into the results. Our study shows that no single sparsifier performs best at preserving all graph properties; e.g. sparsifiers that preserve distance-related graph properties (eccentricity) struggle to perform well on Graph Neural Networks (GNNs). This paper presents a comprehensive experimental study evaluating the performance of sparsification algorithms in preserving essential graph metrics. The insights inform future research in matching graph sparsifiers to graph algorithms to maximize benefits while minimizing quality degradation. Furthermore, we provide a framework to facilitate the future evaluation of evolving sparsification algorithms, graph metrics, and ever-growing graph data.
Article
Full-text available
Recent scholarship contends that ancient Mediterranean economies grew intensively. One explanation is that Smithian growth was spurred by reductions in transaction costs and increased trade flows. This paper argues that an ancient Greek institution, proxenia, was among the key innovations that allowed such growth in the period 500–0 BCE. Proxenia entailed a Greek city-state declaring a foreigner to be its “public friend,” a status that conferred both duties and privileges. The functions performed by “public friends” could facilitate economic transactions between communities. Accordingly, network and regression analyses establish a strong relationship between proxenia grants and trade intensity.
Article
Full-text available
Many phenomena around us are organized in networked form. Our social relationships and our nervous systems are, at root, networks, and not simple ones, given the data they carry and the inability, owing to their sheer size, to predict the behavior of their elements. In geography, the network is indeed a characteristic of many spatial phenomena, such as networks of valleys and rivers, transportation networks, energy networks, communication networks, and others, so analyzing the topological structure of these networks is important for understanding their characteristics and solving their problems. This research offers a modest theoretical background on complex network theory, which developed out of graph theory.
The research traces the development of the relationship between geography and theories of network analysis, and the role of geographers in this field over time. It also presents the definition of complex networks and their types, the mathematical models used in analyzing their topologies, and the place of spatial networks among the main classes of complex networks. We also sought to present, translate, and simplify the most important indicators of network analysis from the perspective of complex network methodology, and to survey the most important software specialized in analyzing complex networks based on the social network analysis methodology. The research concludes with an overview of the future of geographical studies in this direction and of what geographers in general, and Arab geographers in particular, must do to catch up with others in the field of complex network analysis.
Chapter
Graph neural networks (GNNs) have many applications, including in the medical field (e.g., neuroscience), thanks to their flexibility and expressiveness in handling unstructured graph-based data. However, one major issue in applying GNNs to medical applications is their poor scalability on large and complex graphs, primarily due to their high memory usage. This memory usage is driven mainly by the interconnectivity of the data points and the neighbourhood expansion problem, which makes mini-batching non-trivial, unlike in standard neural networks. While many recent publications focus on mini-batching GNNs using sub-graph sampling/partitioning methods, there are many other directions for reducing memory usage, such as quantisation and graph reduction methods. In this paper, we propose a novel topological graph contraction method using centrality measures and a memory-efficient GNN training framework (C-QSIGN), which incorporates our proposed contraction method along with several other state-of-the-art (SOTA) methods. Furthermore, we benchmarked our proposed model's performance in terms of prediction quality and GPU usage against other SOTA methods. We show that C-QSIGN requires the least GPU memory compared to benchmark SOTA models (54% of SIGN and 18% of GCN on the Organ C dataset), allowing more efficient learning on complex, large-scale graphs. Our codes are publicly available at https://github.com/basiralab/CQSIGN.
Article
Graph alignment aims to find correspondent nodes between two graphs. Most existing algorithms assume that correspondent nodes in different graphs have similar local structures. However, this principle may not apply to some real-world application scenarios when two graphs have different densities. Some correspondent node pairs may have very different local structures in these cases. Nevertheless, correspondent nodes are expected to have similar importance, inspiring us to exploit global topology consistency for graph alignment. This paper presents GTCAlign, an unsupervised graph alignment framework based on global topology consistency. An indicating matrix is calculated to show node pairs with consistent global topology based on a comprehensive centrality metric. A graph convolutional network (GCN) encodes local structural and attributive information into low-dimensional node embeddings. Then, node similarities are computed based on the obtained node embeddings under the guidance of the indicating matrix. Moreover, a pair of nodes are more likely to be aligned if most of their neighbors are aligned, motivating us to develop an iterative algorithm to refine the alignment results recursively. We conduct extensive experiments on real-world and synthetic datasets to evaluate the effectiveness of GTCAlign. The experimental results show that GTCAlign outperforms state-of-the-art graph alignment approaches.
Conference Paper
Swarm robotics, and drone swarms in particular, are used in various safety-critical tasks. While much attention has been given to improving swarm control algorithms for greater intelligence, the security implications of various design choices in swarm control algorithms have not been studied. We highlight how an attacker can exploit vulnerabilities in swarm control algorithms to disrupt drone swarms. Specifically, we show that the attacker can target a swarm member (the target drone) through GPS spoofing attacks and thereby indirectly cause other swarm members (victim drones) to veer from their course, resulting in a collision with an obstacle. We call these Swarm Propagation Vulnerabilities. In this paper, we introduce SwarmFuzz, a fuzzing framework to capture this attacker capability and efficiently find such vulnerabilities in swarm control algorithms. SwarmFuzz uses a combination of graph theory and gradient-guided optimization to find potential attack parameters. Our evaluation on a popular swarm control algorithm shows that SwarmFuzz achieves an average success rate of 48.8% in finding vulnerabilities and, compared to random fuzzing, has a 10x higher success rate and 3x lower runtime. We also find that, for a given spoofing distance, larger swarms are more vulnerable to this attack type.
Article
Full-text available
Contact network models are recent alternatives to equation-based models in epidemiology. In this paper, the spread of disease is modeled on contact networks using bond percolation. The weight of an edge in the contact graph is determined as a function of several variables, with the weight given by the product of the probabilities of independent events involving each variable. In the first experiment, the edge weights are computed from a single variable, the number of passengers on flights between two cities within the United States; in the second, they are computed as a function of several variables using data from 2012 Kenyan household contact networks. The paper also explores the dynamics and adaptive nature of contact networks. The results from the contact network model outperform the equation-based model in estimating the spread of the 1918 influenza virus.
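A minimal illustration of the bond-percolation mechanism described here, under invented edge probabilities: each edge is kept independently with its transmission probability, and the outbreak from a seed node is exactly the seed's cluster in the resulting graph:

```python
import random
from collections import deque

def percolation_outbreak(edges, seed, rng):
    """Keep each edge (u, v, p) independently with probability p, then
    return the set of nodes reachable from the seed: under bond
    percolation this set is the final outbreak."""
    adj = {}
    for u, v, p in edges:
        if rng.random() < p:          # edge is 'occupied'
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    reached, q = {seed}, deque([seed])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in reached:
                reached.add(v)
                q.append(v)
    return reached

# Toy contact network; each p would be the product of independent
# per-variable transmission probabilities, as in the weighting scheme
# the abstract describes.
edges = [(0, 1, 0.9), (1, 2, 0.9), (2, 3, 0.1), (0, 2, 0.9)]
rng = random.Random(42)
sizes = [len(percolation_outbreak(edges, 0, rng)) for _ in range(1000)]
print(sum(sizes) / len(sizes))   # mean outbreak size from node 0
```

Averaging over many percolation draws estimates the expected outbreak size; node 3, attached by a weak edge, is rarely reached.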
Article
When an individual is socially connected to two others, the resulting triplet can be closed (if the two social partners are themselves connected) or open (if they are not connected). The proportion of closed triplets, referred to as the binary network transitivity, is a classic measure of the level of interconnectedness of a social network. However, in any given triplet, if the closing link is weak, or indeed if any of the links in the triplet is weak, then the triplet should not contribute as much to network transitivity as if all three links were equally strong. I propose two ways to weight the contribution of each triplet according to the dissimilarity between the three links in the triplet. Empirically, the resulting new metrics conveyed information not picked up by any other network-level metric. I envision that this approach could prove useful in studies of triadic mechanisms, i.e., situations where pre-existing social ties influence the interactions with third parties. These metrics could also serve as repeatable synthetic variables that summarize information about the variability of the strength of social connections.
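The idea of discounting triplets with dissimilar link strengths can be sketched as follows. The specific weightings proposed in the article are not reproduced here; as one illustrative choice, each closed triplet contributes min/max of its three edge weights, so a triangle with one weak link counts for less than an evenly strong one.

```python
from itertools import combinations

def transitivity(weights):
    """weights: dict mapping frozenset({u, v}) -> edge strength.
    Returns (binary, weighted) transitivity.  The weighted variant
    down-weights each closed triplet by min/max of its three edge
    weights -- one illustrative dissimilarity penalty, not the
    article's exact definition."""
    nodes = set()
    for e in weights:
        nodes |= e
    adj = {n: set() for n in nodes}
    for u, v in (tuple(e) for e in weights):
        adj[u].add(v)
        adj[v].add(u)

    triplets = closed = weighted_closed = 0.0
    for n in nodes:
        for u, v in combinations(sorted(adj[n]), 2):
            triplets += 1                      # triplet centred on n
            e = frozenset({u, v})
            if e in weights:                   # closing link present
                closed += 1
                ws = [weights[frozenset({n, u})],
                      weights[frozenset({n, v})],
                      weights[e]]
                weighted_closed += min(ws) / max(ws)
    if triplets == 0:
        return 0.0, 0.0
    return closed / triplets, weighted_closed / triplets
```

For a triangle of equal weights both measures coincide; weakening one link leaves the binary transitivity at 1 while the weighted variant drops.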
Article
Full-text available
Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature is a consequence of two generic mechanisms: networks expand continuously by the addition of new vertices, and new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, indicating that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
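The two mechanisms above, growth and preferential attachment, can be sketched directly. This is a minimal version of the standard construction (each vertex appears in a list once per incident edge, so uniform sampling from that list is degree-proportional); names and the seed-clique choice are illustrative.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a scale-free network: each new vertex attaches m edges to
    existing vertices chosen with probability proportional to degree,
    implemented by uniform sampling from a degree-weighted list."""
    rng = random.Random(seed)
    edges = []
    repeated = []               # each vertex appears once per incident edge
    targets = set(range(m))     # seed vertices for the first arrival
    for new in range(m, n):
        for t in targets:
            edges.append((new, t))
            repeated.extend([new, t])
        targets = set()
        while len(targets) < m:  # m distinct degree-proportional picks
            targets.add(rng.choice(repeated))
    return edges
```

Plotting the degree distribution of a large instance on log-log axes shows the heavy tail the abstract describes.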
Article
Full-text available
A solution for the time- and age-dependent connectivity distribution of a growing random network is presented. The network is built by adding sites that link to earlier sites with a probability A(k) that depends on the number k of preexisting links to that site. For homogeneous connection kernels, A(k) ~ k^γ, different behaviors arise for γ < 1, γ > 1, and γ = 1. For γ < 1, the number of sites with k links, N(k), varies as a stretched exponential. For γ > 1, a single site connects to nearly all other sites. In the borderline case A(k) ~ k, the power law N(k) ~ k^(−ν) is found, where the exponent ν can be tuned to any value in the range 2 < ν < ∞.
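The three regimes can be explored numerically with a direct simulation of the kernel A(k) ~ k^γ: each new site links to one earlier site chosen with probability proportional to k^γ. This O(n²) sketch (my own naming, not the paper's code) is enough to see the γ > 1 "winner-takes-nearly-all" behavior in small runs.

```python
import random

def grow(n, gamma, seed=0):
    """Growing random network with attachment kernel A(k) ~ k**gamma:
    each new site links to one earlier site chosen with probability
    proportional to k**gamma, where k is its current link count."""
    rng = random.Random(seed)
    deg = [1, 1]                         # start from a single linked pair
    for _ in range(n - 2):
        weights = [k ** gamma for k in deg]
        total = sum(weights)
        r = rng.random() * total         # degree-kernel-weighted pick
        cum = 0.0
        for i, w in enumerate(weights):
            cum += w
            if r < cum:
                break
        deg[i] += 1
        deg.append(1)                    # the new site itself
    return deg
```

Comparing max(grow(n, 0.5)), max(grow(n, 1.0)), and max(grow(n, 2.0)) for moderate n illustrates the stretched-exponential, power-law, and gelation regimes respectively.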
Article
Article
scheme provides a satisfactory abstract model of both phenomena. It is in this same direction that we shall look for an explanation of the observed close similarities among the five
Book
In this second edition of the now classic text, the already extensive treatment given in the first edition has been heavily revised by the author. The addition of two new sections, numerous new results and 150 references means that this represents a comprehensive account of random graph theory. The theory (founded by Erdős and Rényi in the late fifties) aims to estimate the number of graphs of a given degree that exhibit certain properties. It not only has numerous combinatorial applications, but also serves as a model for the probabilistic treatment of more complicated random structures. This book, written by an acknowledged expert in the field, can be used by mathematicians, computer scientists and electrical engineers, as well as people working in biomathematics. It is self-contained, and with numerous exercises in each chapter, is ideal for advanced courses or self study.
Article
A Cumulative Advantage Distribution is proposed which models statistically the situation in which success breeds success. It differs from the Negative Binomial Distribution in that lack of success, being a non-event, is not punished by increased chance of failure. It is shown that such a stochastic law is governed by the Beta Function, containing only one free parameter, and this is approximated by a skew or hyperbolic distribution of the type that is widespread in bibliometrics and diverse social science phenomena. In particular, this is shown to be an appropriate underlying probabilistic theory for the Bradford Law, the Lotka Law, the Pareto and Zipf Distributions, and for all the empirical results of citation frequency analysis. As side results one may derive also the obsolescence factor for literature use. The Beta Function is peculiarly elegant for these manifold purposes because it yields both the actual and the cumulative distributions in simple form, and contains a limiting case of an inverse square law to which many empirical distributions conform.
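The "success breeds success" process can be simulated directly. In this sketch (illustrative names, not Price's own formulation), each new citation goes to item i with probability proportional to c_i + 1; the +1 offset is the abstract's key point that lack of success, being a non-event, is not punished, since an uncited item keeps a baseline chance of being chosen.

```python
import random

def cumulative_advantage(n, seed=0):
    """Cumulative advantage: at each step add one new item and award
    one citation to item i with probability proportional to
    cites[i] + 1, so uncited items retain a baseline chance."""
    rng = random.Random(seed)
    cites = [0]
    for _ in range(n - 1):
        total = sum(c + 1 for c in cites)
        r = rng.random() * total
        cum = 0
        for i, c in enumerate(cites):
            cum += c + 1
            if r < cum:
                cites[i] += 1
                break
        cites.append(0)                  # the newly arrived item
    return cites
```

For large n the citation counts produced this way develop the skew, hyperbolic tail the abstract associates with the Bradford, Lotka, Pareto and Zipf distributions.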
Article
The weak connectivity γ of a random net is defined and computed by an approximation method as a function of a, the axone density. It is shown that γ rises rapidly with a, attaining 0.8 of its asymptotic value (unity) for a = 2, where the number of neurons in the net is arbitrarily large. The significance of this parameter is interpreted also in terms of the maximum expected spread of an epidemic under certain conditions.
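The weak connectivity of such a random net can be estimated by simulation rather than by the paper's approximation method: give every neuron a randomly targeted axones, then measure the fraction of nodes in the largest weakly connected component. The construction and names here are a modern sketch, assuming uniformly random targets.

```python
import random
from collections import Counter

def weak_connectivity(n, a, seed=0):
    """Fraction of nodes in the largest weakly connected component of
    a random net where each of n neurons sends a axones to uniformly
    chosen targets (direction ignored, hence 'weak')."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u in range(n):
        for _ in range(a):
            v = rng.randrange(n)          # axone target (self-loops no-op)
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    sizes = Counter(find(x) for x in range(n))
    return max(sizes.values()) / n
```

For a = 2 and large n the estimate is already close to 1, consistent with the abstract's claim that γ rises rapidly with the axone density.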
Article
Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrices Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17. Future directions.
Article
Sociometry is concerned with networks of relationship among groups of people. If the group is very large, the work of tracing all the relationships becomes tedious, and the task of describing the resulting net precisely becomes impossible. Here the problem of such large sociometric nets is approached with probabilistic and statistical methods.
Article
Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
Article
Complex networks describe a wide range of systems in nature and society, with much-quoted examples including the cell, a network of chemicals linked by chemical reactions, or the Internet, a network of routers and computers connected by physical links. While traditionally these systems were modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. Here we review the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, we discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, as well as the interplay between topology and the network's robustness against failures and attacks.
  • F Harary, Graph Theory. Cambridge: Perseus
  • D West, Introduction to Graph Theory. Upper Saddle River
  • J Moreno, Who Shall Survive? Beacon: Beacon House