Conference Paper

The value of information for dynamic decentralised criticality computation


Abstract

Smart manufacturing uses advanced data-driven solutions to improve performance and operational resilience. These solutions require large amounts of data delivered quickly, which is enabled by telecommunication networks and network elements such as routers and switches. Disruptions can render a network inoperable; avoiding them requires advanced responsiveness to network usage, achievable by embedding autonomy into the network through fast, scalable algorithms that manage disruptions using key metrics such as the impact of a network element's failure on system functions. Centralised approaches are insufficient here because the time needed to transmit data to a controller can render that data irrelevant by the time it arrives. Decentralised, information-bounded measures address this by placing computational agents close to the data source. We propose an agent-based model to assess the value of information for calculating decentralised criticality metrics: a data collection agent is assigned to each network element and computes relevant indicators of the impact of failure in a decentralised way. The approach is evaluated by simulating discrete information exchange with concurrent data analysis, comparing the accuracy of the measures against a benchmark and using measure computation time as a proxy for computational complexity. Results show that losses in accuracy are offset by faster computation with fewer network dependencies.
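As an illustration of the decentralised, information-bounded style of computation described in the abstract, the sketch below (a hedged illustration, not the authors' implementation) gives each network element an agent that exchanges locally known node identifiers for a bounded number of discrete rounds and uses the size of the discovered neighbourhood as a cheap criticality proxy, compared against a centralised benchmark. The topology generator, the number of rounds and the top-10 agreement score are illustrative choices only.

    # Hedged sketch: per-element agents, bounded information exchange,
    # and a comparison against a centralised benchmark measure.
    import networkx as nx

    G = nx.barabasi_albert_graph(100, 2, seed=7)   # stand-in topology
    known = {v: {v} for v in G}                    # each agent starts knowing only itself

    ROUNDS = 2                                     # information horizon (illustrative)
    for _ in range(ROUNDS):
        # synchronous discrete exchange: merge the sets held by neighbours
        known = {v: known[v].union(*(known[u] for u in G[v])) for v in G}

    decentralised = {v: len(known[v]) - 1 for v in G}   # k-hop reach as a proxy
    benchmark = nx.betweenness_centrality(G)            # centralised reference

    def top10(scores):
        return set(sorted(scores, key=scores.get, reverse=True)[:10])

    print(len(top10(decentralised) & top10(benchmark)), "of the top-10 elements agree")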

References
Chapter
Full-text available
Telecommunication networks are designed to route data along fixed pathways and so have minimal reactivity to emergent loads. To service today's increased data requirements, network management must be revolutionised so that networks respond proactively to anomalies quickly and efficiently. To equip the network with resilience, a distributed design calls for node agency, so that nodes can predict the emergence of critical data loads that lead to disruptions; this informs prognostics models and proactive maintenance planning. Proactive maintenance needs KPIs, most importantly the probability and impact of failure; the latter is estimated by criticality, the negative impact on network connectedness that results from removing an element. In this paper, we study criticality in the sense of the increased incidence of data congestion caused by a node being unable to process new data packets. We introduce three novel, distributed measures of criticality that can be used to predict the behaviour of dynamic processes occurring on a network. Their performance is compared and tested on a simulated diffusive data transfer network. The results show the potential of the distributed dynamic criticality measures to predict the accumulation of data packet loads within a communications network. These measures are expected to be useful in proactive maintenance and routing for telecommunications, as well as in informing businesses of partner criticality in supply networks.
Article
Full-text available
Long-haul backbone communication networks provide internet services across a region or a country. Internet access in smaller areas and the functioning of other critical infrastructures rely on the high-speed services and resilience of the long-haul backbone. Hence, such networks are key to the decision-making of internet service managers and providers, as well as to the management and control of other critical infrastructures. This paper proposes a critical link analysis of the physical infrastructure of the UK internet backbone network from a dynamic, complex network approach. To this end, perturbation network analyses provide a natural framework for measuring network tolerance to structural or topological modifications. Furthermore, the variations in backbone data traffic that typically occur over a day are taken into account. The novelty of the proposal is therefore twofold: a weighted (traffic-informed) Laplacian matrix is proposed to compute a perturbation centrality measure, and it is enhanced with a time-dependent perturbation analysis to detect changes in link criticality arising from daily data-traffic variation. The results show which links are most critical at each time of day, which is of prime importance for protection, maintenance and mitigation planning for the UK internet backbone.
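One common way to realise a traffic-weighted Laplacian and a perturbation-style link criticality is sketched below; this is an assumption-laden illustration (edge weights copied into a hypothetical "traffic" attribute, criticality scored as the drop in algebraic connectivity when a link is removed), not the paper's exact measure.

    # Hedged illustration: weighted Laplacian L = D - W, with each link
    # scored by the drop in algebraic connectivity (lambda_2) on removal.
    import numpy as np
    import networkx as nx

    def algebraic_connectivity(G, weight="traffic"):
        L = nx.laplacian_matrix(G, weight=weight).toarray().astype(float)
        return np.linalg.eigvalsh(L)[1]            # second-smallest eigenvalue

    G = nx.les_miserables_graph()                  # stand-in weighted topology
    nx.set_edge_attributes(G, nx.get_edge_attributes(G, "weight"), "traffic")

    baseline = algebraic_connectivity(G)
    criticality = {}
    for u, v in list(G.edges()):
        w = G[u][v]["traffic"]
        G.remove_edge(u, v)
        criticality[(u, v)] = baseline - algebraic_connectivity(G)
        G.add_edge(u, v, traffic=w)

    print(max(criticality, key=criticality.get), "is the most critical link here")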
Conference Paper
Full-text available
Mesa is an agent-based modeling framework written in Python. Originally started in 2013, it was created to be the go-to tool for researchers wishing to build agent-based models with Python. Within this paper we present Mesa's design goals, along with its underlying architecture. This includes its core components: 1) the model (Model, Agent, Schedule, and Space), 2) analysis (Data Collector and Batch Runner), and 3) visualization (Visualization Server and Visualization Browser Page). We then discuss how agent-based models can be created in Mesa. This is followed by a discussion of applications and extensions by other researchers, demonstrating how Mesa's decoupled and extensible design creates the opportunity for a larger decentralized ecosystem of packages that people can share and reuse for their own needs. Finally, the paper concludes with a summary and discussion of future development areas for Mesa.
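A minimal model in this style might look as follows. The sketch assumes the classic Mesa 2.x API (Agent, Model, the RandomActivation scheduler and the DataCollector); the PacketAgent/NetworkModel names and the toy load dynamics are invented for illustration, and Mesa 3 changed the Agent constructor and retired the scheduler module, so the exact calls depend on the installed version.

    # Minimal Mesa sketch (assumes the Mesa 2.x API).
    from mesa import Agent, Model
    from mesa.time import RandomActivation
    from mesa.datacollection import DataCollector

    class PacketAgent(Agent):
        def __init__(self, unique_id, model):
            super().__init__(unique_id, model)
            self.load = 0

        def step(self):
            self.load += self.random.randint(0, 3)   # toy data arrival

    class NetworkModel(Model):
        def __init__(self, n_agents=10):
            super().__init__()
            self.schedule = RandomActivation(self)
            for i in range(n_agents):
                self.schedule.add(PacketAgent(i, self))
            self.datacollector = DataCollector(agent_reporters={"load": "load"})

        def step(self):
            self.datacollector.collect(self)
            self.schedule.step()

    model = NetworkModel()
    for _ in range(20):
        model.step()
    print(model.datacollector.get_agent_vars_dataframe().tail())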
Article
Full-text available
Many future innovative computing services will use Fog Computing Systems (FCS), integrated with Internet of Things (IoT) resources. These new services, built on the convergence of several distinct technologies, need to fulfil time-sensitive functions, provide variable levels of integration with their environment, and incorporate data storage, computation, communications, sensing, and control. There are, however, significant problems to be solved before such systems can be considered fit for purpose. The high heterogeneity, complexity, and dynamics of these resource-constrained systems bring new challenges to their robust and reliable operation, which implies the need for integral resilience management strategies. This paper surveys the state of the art in the relevant fields and discusses the emerging research issues and future trends. We envisage future applications with very stringent requirements, notably high-precision latency and synchronization between large sets of flows, for which FCSs will be key enablers. We thus hope to provide new insights into the design and management of resilient FCSs formed by IoT devices, edge computing servers and wireless sensor networks; these systems can be modelled using Game Theory and flexibly programmed with the latest software and virtualization platforms.
Article
Full-text available
The complexity of large-scale network systems, made up of large numbers of nonlinearly interconnected components, restricts their modeling and analysis. In this paper, we propose a framework for hierarchical modeling of a complex network system based on a recursive unsupervised spectral clustering method. The hierarchical model serves to facilitate the management of complexity in the analysis of real-world critical infrastructures. We exemplify this with the reliability analysis of the 380 kV Italian Power Transmission Network (IPTN). In this analysis, the classical component Importance Measures (IMs) of reliability theory are extended to make them applicable to a complex distributed network system. Using these extended IMs, the reliability properties of the IPTN can be evaluated within the hierarchical system model, with the aim of providing risk managers with information on the risk/safety significance of system structures and components.
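To make the spectral-clustering ingredient concrete, the sketch below performs a single Fiedler-vector bisection with NetworkX and NumPy; applying it recursively to each part would yield a hierarchy. It is only a generic illustration of the technique under those assumptions, not the paper's recursive procedure or its importance-measure extension.

    # Hedged sketch: one spectral bisection step using the Fiedler vector.
    import numpy as np
    import networkx as nx

    def spectral_bisect(G):
        nodes = list(G.nodes())
        L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
        _, vecs = np.linalg.eigh(L)
        fiedler = vecs[:, 1]                        # eigenvector of lambda_2
        part_a = [n for n, x in zip(nodes, fiedler) if x >= 0]
        part_b = [n for n, x in zip(nodes, fiedler) if x < 0]
        return part_a, part_b

    G = nx.karate_club_graph()                      # stand-in network
    a, b = spectral_bisect(G)
    print(len(a), "vs", len(b), "nodes in the two clusters")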
Article
Full-text available
Centrality indices are often used to analyze the functionality of nodes in a communication network. To date, most analyses are performed on static networks in which some entity has global knowledge of the network's properties. To expand the scope of these analysis methods to decentralized networks, we propose a general framework for decentralized algorithms that calculate centrality indices. We describe variants of the general algorithm for four different centralities, with emphasis on the algorithm for betweenness centrality. Betweenness centrality is the most complex of these measures and is best suited to describing network communication based on shortest paths and to predicting the congestion sensitivity of a network. The communication complexity of this latter algorithm is asymptotically optimal, and its time complexity scales with the diameter of the network. The calculated centrality index can be used to adapt the communication network to given constraints and changing demands, so that relevant properties, such as the network diameter or a uniform distribution of energy consumption, are optimized.
Conference Paper
Full-text available
We propose a methodology for locating the nodes most critical to network robustness in a fully distributed way. Such critical nodes may be thought of as those most related to the notion of network centrality. Our proposal relies only on a localized spectral analysis of a limited neighborhood around each node in the network. We also present a procedure that allows navigation from any node towards a critical node using only the local information computed by the proposed algorithm. Experimental results confirm the effectiveness of our proposal on networks of different scales and topological characteristics.
Conference Paper
Full-text available
Assessing network vulnerability before potential disruptive events such as natural disasters or malicious attacks is vital for network planning and risk management. It enables us to seek out and safeguard against the most destructive scenarios, in which overall network connectivity falls dramatically. Existing vulnerability assessments mainly focus on investigating inhomogeneous properties of graph elements, such as node degree; however, these measures and the corresponding heuristic solutions can provide neither an accurate evaluation over general network topologies nor performance guarantees for large-scale networks. To this end, we investigate a measure called pairwise connectivity and formulate the vulnerability assessment problem as a new graph-theoretical optimization problem called β-disruptor, which aims to discover the set of critical nodes or edges whose removal results in the maximum decline in global pairwise connectivity. Our results consist of the NP-completeness and inapproximability proofs for this problem, an O(log n log log n) pseudo-approximation algorithm for detecting the set of critical nodes, and an O(log^1.5 n) pseudo-approximation algorithm for detecting the set of critical edges. In addition, we devise an efficient heuristic algorithm and validate the performance of our model and algorithms through extensive simulations.
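The pairwise-connectivity objective itself is simple to state in code. The sketch below computes it and applies a naive greedy heuristic for critical nodes; the helper names and the greedy strategy are illustrative only, not the paper's pseudo-approximation algorithms (the β-disruptor problem itself is NP-complete).

    # Pairwise connectivity: number of connected node pairs.
    import networkx as nx

    def pairwise_connectivity(G):
        return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

    def greedy_critical_nodes(G, k):
        """Repeatedly remove the node whose removal reduces connectivity most."""
        H, removed = G.copy(), []
        for _ in range(k):
            best = min(H.nodes(),
                       key=lambda v: pairwise_connectivity(nx.restricted_view(H, [v], [])))
            removed.append(best)
            H.remove_node(best)
        return removed

    G = nx.karate_club_graph()
    print(pairwise_connectivity(G), greedy_critical_nodes(G, 3))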
Article
Full-text available
A family of new measures of point and graph centrality, based on early intuitions of Bavelas (1948), is introduced. These measures define centrality in terms of the degree to which a point falls on the shortest path between others and therefore has a potential for control of communication. They may be used to index centrality in any large or small network of symmetrical relations, whether connected or unconnected.
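In later, standard notation, the betweenness-based point centrality introduced by this family is usually written as

    % sigma_st is the number of shortest s-t paths and sigma_st(v) the
    % number of those that pass through v.
    C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}}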
Article
Full-text available
Betweenness centrality lies at the core of both the transport and the structural vulnerability properties of complex networks; however, it is computationally costly, and its measurement for networks with millions of nodes is nearly impossible. By introducing a multiscale decomposition of shortest paths, we show that the contributions to betweenness coming from geodesics not longer than L obey a characteristic scaling with L, which can be used to predict the distribution of the full centralities. The method is also illustrated on a real-world social network of 5.5 × 10^6 nodes and 2.7 × 10^7 links.
Chapter
Dynamic self-forming/self-healing communication networks that exchange IP traffic are known as mobile ad hoc networks (MANETs). The performance and vulnerabilities of such networks, and their dependence on continuously changing network topologies under a range of conditions, are not fully understood. In this work, we investigate the relationship between network topology and performance for a 128-node packet-based network composed of four 32-node communities, by simulating packet exchange between network nodes. To a first approximation, the proposed model may represent a company of soldiers consisting of four platoons, where each soldier is equipped with a MANET-participating radio. In this model, every network node is a source of network traffic, a potential destination for network packets, and also routes network packets destined for other nodes. We used the Girvan-Newman benchmark to generate random networks with given community structures. The interaction strength between communities was expressed in terms of the relative number of network links. The average packet travel time was used as a proxy for network performance. To simulate a network attack, selected subsets of connections between nodes were disabled and the performance of the network was observed. As expected, the simulations show that the average packet travel time between communities of users (i.e. between platoons) is more strongly affected by the degree of mixing than the average packet travel time within a community of users (i.e. within an individual platoon). While the conditions presented here simulate a relatively mild attack or interference, the simulation results indicate significant effects on the average packet travel time between communities.
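For readers who want to reproduce the topology-generation step, a Girvan-Newman-style benchmark of four 32-node communities can be approximated with NetworkX's planted partition model, as in the hedged sketch below; the mixing probabilities p_in and p_out are illustrative values, not the chapter's settings.

    # Hedged sketch: four communities of 32 nodes with tunable mixing.
    import networkx as nx

    p_in, p_out = 0.3, 0.02                      # illustrative mixing parameters
    G = nx.planted_partition_graph(4, 32, p_in, p_out, seed=42)

    partition = G.graph["partition"]             # list of node sets, one per community
    community = {v: i for i, block in enumerate(partition) for v in block}
    inter = sum(1 for u, v in G.edges() if community[u] != community[v])
    print(G.number_of_nodes(), "nodes,", inter, "inter-community links")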
Article
Real networks are heterogeneous, with nodes playing very different roles in structure and function. Identifying vital nodes is thus very significant, allowing us to control the outbreak of epidemics, target advertisements for e-commerce products, predict popular scientific publications, and so on. Vital node identification attracts increasing attention from both the computer science and physics communities, with algorithms ranging from simply counting immediate neighbors to complicated machine learning and message-passing approaches. In this review, we clarify the concepts and metrics, classify the problems and methods, review the important progress, and describe the state of the art. Furthermore, we provide extensive empirical analyses comparing well-known methods on disparate real networks, and highlight future directions. Despite the emphasis on physics-rooted approaches, the unification of language and comparison with cross-domain methods should trigger interdisciplinary solutions in the near future.
Article
This document is intended to provide background information for offerers responding to BAA 95-40: Evolutionary Design of Complex Software (EDCS). It describes the general problem that the EDCS Program addresses along with some of the characteristics ...
Conference Paper
Communication networks, in particular the Internet, face a wide spectrum of challenges that can disrupt our daily lives. We define challenges as adverse events triggering faults that eventually result in service failures. Understanding these challenges is essential for improving current networks and for designing Future Internet architectures. In this paper, we present a taxonomy of network challenges based on past and potential events. Moreover, we describe how the challenges correlate with our taxonomy. We believe that such a taxonomy is valuable for evaluating design choices as well as for establishing a common terminology among researchers.
Article
Identifying influential nodes that lead to faster and wider spreading in complex networks is of theoretical and practical significance. The degree centrality method is very simple but of little relevance. Global metrics such as betweenness centrality and closeness centrality can better identify influential nodes, but cannot be applied to large-scale networks because of their computational complexity. To design an effective ranking method, we propose a semi-local centrality measure as a trade-off between the low-relevance degree centrality and the other, time-consuming measures. We use the Susceptible–Infected–Recovered (SIR) model to evaluate performance in terms of the spreading rate and the number of infected nodes. Simulations on four real networks show that our method identifies influential nodes well.
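A semi-local centrality in this spirit can be written compactly; in the sketch below (a hedged reading of the construction, with illustrative function names), N(w) counts a node's nearest and next-nearest neighbours, Q(u) sums N over u's neighbours, and the score of v sums Q over v's neighbours.

    # Hedged sketch of a semi-local centrality.
    import networkx as nx

    def semi_local_centrality(G):
        # N(w): nodes within two hops of w, excluding w itself
        N = {w: len(nx.single_source_shortest_path_length(G, w, cutoff=2)) - 1
             for w in G}
        Q = {u: sum(N[w] for w in G[u]) for u in G}
        return {v: sum(Q[u] for u in G[v]) for v in G}

    G = nx.karate_club_graph()
    scores = semi_local_centrality(G)
    print(sorted(scores, key=scores.get, reverse=True)[:5])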
Article
Egocentric centrality measures (for data on a node's first-order zone), parallel to Freeman's [Social Networks 1 (1979) 215] centrality measures for complete (sociocentric) network data, are considered. Degree-based centrality is in principle identical for egocentric and sociocentric network data. A closeness measure is uninformative for egocentric data, since all geodesic distances from ego to other nodes in the first-order zone are 1 by definition. The extent to which egocentric and sociocentric versions of Freeman's betweenness centrality measure correspond is explored empirically. Across seventeen diverse networks, that correspondence is found to be relatively close, though variations in egocentric network composition do lead to some notable differences in egocentric and sociocentric betweenness. The findings suggest that research design has a relatively modest impact on assessing the relative betweenness of nodes, and that a betweenness measure based on egocentric network data could be a reliable substitute for Freeman's betweenness measure when it is not practical to collect complete network data. However, differences in the research methods used in sociocentric and egocentric studies could lead to additional differences in the respective betweenness centrality measures.
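The comparison is easy to reproduce on a small graph: egocentric betweenness is simply betweenness computed within each node's first-order ego network. The sketch below is an illustration only, using an arbitrary example graph and SciPy's rank correlation as the agreement score.

    # Hedged sketch: egocentric vs sociocentric betweenness.
    import networkx as nx
    from scipy.stats import spearmanr

    G = nx.florentine_families_graph()           # stand-in network
    ego = {v: nx.betweenness_centrality(nx.ego_graph(G, v))[v] for v in G}
    socio = nx.betweenness_centrality(G)

    nodes = list(G)
    rho, _ = spearmanr([ego[v] for v in nodes], [socio[v] for v in nodes])
    print(f"Spearman rank correlation between the two measures: {rho:.2f}")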
Article
In a system whose functioning or failure depends on the functioning or failure of its components, some components may play a more important part than others. A quantitative definition of this notion of importance is proposed in the present paper for systems with coherent structures, assuming (1) that only the structure of the system is known, or (2) that the reliabilities of all components are also known. Some theoretical properties of the concepts so defined are discussed, and applications are presented to problems such as the allocation of spare parts or the appropriation of funds for improving component reliability.
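In later, standard notation, the reliability-based version of this importance (Birnbaum's measure) for component i is commonly written as

    % h(p) is the system reliability as a function of the vector p of
    % component reliabilities; 1_i (0_i) fixes component i as working (failed).
    I_B(i) = \frac{\partial h(\mathbf{p})}{\partial p_i}
           = h(1_i, \mathbf{p}) - h(0_i, \mathbf{p})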
Article
A complex network can be modeled as a graph representing the "who knows who" relationship. In the context of graph theory for social networks, the notion of centrality is used to assess the relative importance of nodes in a given network topology. For example, in a network composed of large dense clusters connected through only a few links, the nodes involved in those links are particularly critical as far as network survivability is concerned. This may also impact any application running on top of the network. Such information can be exploited for various topological maintenance tasks to prevent congestion and disruption, and can also be used offline to identify the most important actors in large social interaction graphs. Several forms of centrality have been proposed so far. Yet they suffer from imperfections: initially designed for small social graphs, they are either of limited use (degree centrality) or ill-suited to a distributed setting (e.g. random walk betweenness centrality). In this paper we introduce a novel form of centrality, the second order centrality, which can be computed in a distributed manner. It provides each node locally with a value reflecting its relative criticality, and relies on a random walk visiting the network in an unbiased fashion. To this end, each node records the time elapsed between visits of that random walk (called the return time in the sequel) and computes the standard deviation (or second order moment) of these return times. The key point is that central nodes see the random walk regularly, compared with other nodes in the topology. Through both theoretical analysis and simulation, we show that this standard deviation can be used to accurately identify critical nodes as well as to characterize the graph topology globally in a distributed way. We finally compare our proposal to well-known centralities to assess its competitiveness.
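The return-time idea can be sketched in a few lines. The code below uses a plain random walk for brevity (the paper relies on an unbiased walk), records each node's return times and reports their standard deviation, with low values indicating central nodes; the step count and the example graph are arbitrary choices.

    # Hedged sketch of return-time (second order) scores.
    import random
    import statistics
    import networkx as nx

    def second_order_scores(G, steps=100_000, seed=0):
        rng = random.Random(seed)
        last_visit, returns = {}, {v: [] for v in G}
        node = rng.choice(list(G.nodes()))
        for t in range(steps):
            if node in last_visit:
                returns[node].append(t - last_visit[node])
            last_visit[node] = t
            node = rng.choice(list(G[node]))     # plain random walk step
        return {v: statistics.stdev(r) for v, r in returns.items() if len(r) > 1}

    G = nx.karate_club_graph()
    scores = second_order_scores(G)
    print(sorted(scores, key=scores.get)[:5])    # lowest std dev = most central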
Conference Paper
Centrality is a concept often used in social network analysis to study different properties of networks that are modeled as graphs. We present a new centrality metric called localized bridging centrality (LBC). LBC is based on the bridging centrality (BC) metric that Hwang et al. recently introduced. Bridging nodes are nodes that are strategically located in between highly connected regions. LBC is capable of identifying bridging nodes with an accuracy comparable to that of the BC metric for most networks. As the name suggests, only local information from surrounding nodes is used to compute the LBC metric, whereas global knowledge is required to calculate the BC metric. The main difference between LBC and BC is that LBC uses the egocentric definition of betweenness centrality to identify bridging nodes, while BC uses the sociocentric definition. Our LBC metric is thus suitable for distributed or parallel computation and has the benefit of being an order of magnitude lower in computational complexity. We compare the results produced by BC and LBC in three examples, and we applied our LBC metric to the network analysis of a real wireless mesh network. Our results indicate that the LBC metric is as powerful as the BC metric at identifying bridging nodes. The LBC metric is thus an important tool that can help network administrators identify, in a distributed manner, critical nodes that are important for the robustness of the network.
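A localized bridging centrality in this spirit can be assembled from two purely local quantities, as in the hedged sketch below: the ego betweenness of a node multiplied by a bridging coefficient built from its own and its neighbours' degrees. The exact weighting here is an assumption for illustration, not taken verbatim from the paper.

    # Hedged sketch of a localized bridging centrality.
    import networkx as nx

    def localized_bridging_centrality(G):
        scores = {}
        for v in G:
            ego_btw = nx.betweenness_centrality(nx.ego_graph(G, v))[v]
            inv_deg_sum = sum(1.0 / G.degree(u) for u in G[v])
            bridging = (1.0 / G.degree(v)) / inv_deg_sum if inv_deg_sum else 0.0
            scores[v] = ego_btw * bridging
        return scores

    G = nx.karate_club_graph()
    lbc = localized_bridging_centrality(G)
    print(sorted(lbc, key=lbc.get, reverse=True)[:5])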
Salazar, J.C., Nejjari, F., Sarrate, R., Weber, P., and Theilliol, D. (2016). Reliability Importance Measures for Availability Enhancement in Drinking Water Networks. Technical report.
Veres, A. and Boda, M. (2005). Complex Dynamics in Communication Networks. Springer: Complexity.