This chapter is devoted to the description and computation of Accessibility in Static and Dynamic Risk. As we will see, this parameter is essential for the computation of both types of intentional risks.

The application of the network approach to the urban case poses several questions in terms of how to deal with metric distances, what kind of graph representation to use, what kind of measures to investigate, how to deepen the correlation between measures of the structure of the network and measures of the dynamics on the network, and what the possible contributions from the GIS community are. In this paper, the authors address a study of six cases of urban street networks characterised by different patterns and historical roots. The authors propose a representation of the street networks based firstly on a primal graph, where intersections are turned into nodes and streets into edges. In a second step, a dual graph, where streets are nodes and intersections are edges, is constructed by means of an innovative generalisation model named Intersection Continuity Negotiation, which makes it possible to acknowledge the continuity of streets over a plurality of edges. Finally, the authors address a comparative study of some structural properties of the networks, seeking significant similarities among clusters of cases. A wide set of network analysis techniques is implemented over the dual graph: in particular, the authors show that most of the considered networks have a broad degree distribution typical of scale-free networks and exhibit small-world properties as well.
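
The primal-to-dual step described in this abstract can be sketched in a few lines. This is a simplified stand-in, not the authors' Intersection Continuity Negotiation model: here, pre-assigned street labels decide which primal edges belong to the same street, and two streets become adjacent in the dual graph when they meet at an intersection. The toy network is illustrative.

```python
# Sketch of a primal-to-dual conversion for a street network.
# Assumption: street membership is given by labels on primal edges,
# standing in for the Intersection Continuity Negotiation step.
from itertools import combinations

# Primal graph: each edge is (intersection, intersection, street_label).
primal = [
    (1, 2, "Main"), (2, 3, "Main"),
    (2, 4, "Oak"), (4, 5, "Oak"),
    (3, 5, "Elm"),
]

def dual_graph(primal_edges):
    """Dual graph: streets are nodes; two streets are linked when they
    share an intersection."""
    at = {}  # intersection -> set of streets meeting there
    for u, v, street in primal_edges:
        at.setdefault(u, set()).add(street)
        at.setdefault(v, set()).add(street)
    dual = set()
    for streets in at.values():
        for a, b in combinations(sorted(streets), 2):
            dual.add((a, b))
    return dual

dual = dual_graph(primal)
```

Note that a street spanning several primal edges ("Main" above) becomes a single dual node, which is what lets long continuous streets acquire a high degree in the dual representation.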

In our previous article we defined temporal quantities used for the description of temporal networks with zero latency, and we showed that some centrality measures (e.g. degree, betweenness, closeness) can be extended to the case of temporal networks. In this article we broaden the scope of centrality measures in temporal networks to centrality measures derived from the eigenvectors of network matrices, namely the eigenvector in-centrality, the eigenvector out-centrality, the Katz centrality, the Bonacich α- and (α, β)-centrality, the HITS algorithm (also known as Hubs and Authorities) introduced by Kleinberg, and the PageRank algorithm defined by Page and Brin. We extended our Python library TQ (Temporal Quantities) to include the algorithms from our research. The library is available online. The procedures will also be added to the user-friendly program called Ianus. We tested the proposed algorithms on Franzosi's violence network and on Corman's Reuters terror news network and present the results.
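
As a minimal illustration of the eigenvector-based scores the article extends to temporal networks, here is PageRank by power iteration on a small static digraph. This is a generic textbook sketch, not the TQ library's implementation; the damping factor and the toy graph are illustrative.

```python
# Sketch: PageRank by power iteration on a static directed graph.
def pagerank(adj, damping=0.85, iters=100):
    """adj maps each node to its list of out-neighbours."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for u in nodes:
            out = adj[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(adj)
```

Node "c" collects links from both "a" and "b", so it ends up with the highest score; the ranks always sum to 1 because each iteration redistributes the full mass.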

The paper attempts to develop a suitable accessibility index for networks where each link carries a value for which a smaller number is preferred, such as distance, cost, or travel time. A measure called distance sum is characterized by three independent properties: anonymity, an appropriately chosen independence axiom, and dominance preservation, which requires that a node that is at least as close to every other node be at least as accessible. We argue that the independence property needs to be eliminated in certain applications. Therefore a family of accessibility indices, the generalized distance sum, is suggested. It is linear, considers the accessibility of vertices besides their distances, and depends on a parameter that controls its deviation from distance sum. The generalized distance sum is anonymous and satisfies dominance preservation if its parameter meets a sufficient condition. Two detailed examples demonstrate its ability to reflect the vulnerability of accessibility to link disruptions.
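
Assuming distance sum is simply the total shortest-path distance from a node to all others (smaller meaning more accessible), it can be computed with Dijkstra's algorithm; the toy network and its travel-time weights below are illustrative, not from the paper.

```python
# Sketch: distance sum accessibility of each node, assuming it is the sum
# of shortest-path lengths to every other node (smaller = more accessible).
import heapq

def dijkstra(graph, source):
    """Shortest-path lengths from source in a weighted graph
    (adjacency dict of dicts)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def distance_sum(graph, node):
    dist = dijkstra(graph, node)
    return sum(d for v, d in dist.items() if v != node)

# Toy network: symmetric travel times on links.
graph = {
    "a": {"b": 1.0, "c": 4.0},
    "b": {"a": 1.0, "c": 2.0, "d": 5.0},
    "c": {"a": 4.0, "b": 2.0, "d": 1.0},
    "d": {"b": 5.0, "c": 1.0},
}
scores = {v: distance_sum(graph, v) for v in graph}
```

In this example the two central nodes "b" and "c" both score 6 while the peripheral "a" and "d" score 8, so the index ranks them as more accessible; removing the link b–c would raise both scores, which is the kind of link-disruption vulnerability the paper examines.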

The new concept of multilevel network is introduced in order to embody some topological properties of complex systems with structures in the mesoscale which are not completely captured by the classical models. This new model, which generalizes the hyper-network and hyper-structure models, fits perfectly with several real-life complex systems, including social and public transportation networks. We present an analysis of the structural properties of the multilevel network, including the clustering and the metric structures. Some analytical relationships amongst the efficiency and clustering coefficient of this new model and the corresponding parameters of the underlying network are obtained. Finally some random models for multilevel networks are given to illustrate how different multilevel structures can produce similar underlying networks and therefore that the mesoscale structure should be taken into account in many applications.

We analyze the distribution of PageRank on a directed configuration model and show that, as the size of the graph grows to infinity, it can be closely approximated by the PageRank of the root node of an appropriately constructed tree. This tree approximation is in turn related to the solution of a linear stochastic fixed point equation that has been thoroughly studied in the recent literature.

Despite its increasing role in communication, the world wide web remains the least controlled medium: any individual or institution can create websites with an unrestricted number of documents and links. While great efforts are made to map and characterize the Internet's infrastructure, little is known about the topology of the web. Here we take a first step to fill this gap: we use local connectivity measurements to construct a topological model of the world wide web, allowing us to explore and characterize its large-scale properties.

Network-analysis literature is rich in node-centrality measures that quantify the centrality of a node as a function of the (shortest) paths of the network that go through it. Existing work focuses on defining instances of such measures and designing algorithms for the specific combinatorial problems that arise for each instance. In this work, we propose a unifying definition of centrality that subsumes all path-counting based centrality definitions: e.g., stress, betweenness or paths centrality. We also define a generic algorithm for computing this generalized centrality measure for every node and every group of nodes in the network. Next, we define two optimization problems: k-Group Centrality Maximization and k-Edge Centrality Boosting. In the former, the task is to identify the subset of k nodes that have the largest group centrality. In the latter, the goal is to identify up to k edges to add to the network so that the centrality of a node is maximized. We show that both of these problems can be solved efficiently for arbitrary centrality definitions using our general framework. In a thorough experimental evaluation we show the practical utility of our framework and the efficacy of our algorithms.
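
A minimal sketch of path-counting group centrality, under one common definition of group betweenness: the fraction of shortest paths between pairs of outside nodes that pass through at least one group member. This brute-force version is for intuition only; it is not the paper's generic algorithm and scales poorly.

```python
# Sketch: group betweenness by brute-force shortest-path enumeration.
# Assumption: group betweenness = fraction of shortest paths between
# outside node pairs whose interior touches the group.
from collections import deque
from itertools import combinations

def shortest_paths(adj, s, t):
    """All shortest paths from s to t in an unweighted graph."""
    dist, pred = {s: 0}, {s: []}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                pred[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                pred[v].append(u)
    def build(v):
        if v == s:
            return [[s]]
        return [p + [v] for u in pred[v] for p in build(u)]
    return build(t) if t in dist else []

def group_betweenness(adj, group):
    hits = total = 0
    outside = [v for v in adj if v not in group]
    for s, t in combinations(outside, 2):
        for path in shortest_paths(adj, s, t):
            total += 1
            if any(v in group for v in path[1:-1]):
                hits += 1
    return hits / total if total else 0.0

adj = {
    "a": ["b"], "b": ["a", "c", "d"],
    "c": ["b", "e"], "d": ["b", "e"], "e": ["c", "d"],
}
gb = group_betweenness(adj, {"b"})
```

With a single-node group this reduces to ordinary betweenness, which is the sense in which the group measure subsumes the node measure.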

Most real and engineered systems include multiple subsystems and layers of connectivity, and it is important to take such features into account to try to improve our understanding of these systems. It is thus necessary to generalize "traditional" network theory by developing (and validating) a framework and associated tools to study multilayer systems in a comprehensive fashion. The origins of such efforts date back several decades and arose in multiple disciplines, and now the study of multilayer networks has become one of the most important directions in network science. In this paper, we discuss the history of multilayer networks (and related concepts) and review the exploding body of work on such networks. To unify the disparate terminology in the large body of recent work, we discuss a general framework for multilayer networks, construct a dictionary of terminology to relate the numerous existing concepts to each other, and provide a thorough discussion that compares, contrasts, and translates between related notions such as multilayer networks, multiplex networks, interdependent networks, networks of networks, and many others. We also survey and discuss existing data sets that can be represented as multilayer networks. We review attempts to generalize single-layer-network diagnostics to multilayer networks. We also discuss the rapidly expanding research on multilayer-network models and notions like community detection, connected components, tensor decompositions, and various types of dynamical processes on multilayer networks. We conclude with a summary and an outlook.

Current research on the spectral theory of finite graphs may be seen as part of a wider effort to forge closer links between algebra and combinatorics (in particular between linear algebra and graph theory). This book describes how this topic can be strengthened by exploiting properties of the eigenspaces of adjacency matrices associated with a graph. The extension of spectral techniques proceeds at three levels: using eigenvectors associated with an arbitrary labelling of graph vertices, using geometrical invariants of eigenspaces such as graph angles and main angles, and introducing certain kinds of canonical eigenvectors by means of star partitions and star bases. One objective is to describe graphs by algebraic means as far as possible, and the book discusses the Ulam reconstruction conjecture and the graph isomorphism problem in this context. Further problems of graph reconstruction and identification are used to illustrate the importance of graph angles and star partitions in relation to graph structure. Specialists in graph theory will welcome this treatment of important new research.

Systems as diverse as genetic networks or the World Wide Web are best described as networks with complex topology. A common property of many large networks is that the vertex connectivities follow a scale-free power-law distribution. This feature was found to be a consequence of two generic mechanisms: (i) networks expand continuously by the addition of new vertices, and (ii) new vertices attach preferentially to sites that are already well connected. A model based on these two ingredients reproduces the observed stationary scale-free distributions, which indicates that the development of large networks is governed by robust self-organizing phenomena that go beyond the particulars of the individual systems.
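
The two generic mechanisms can be sketched as a small growth simulation in the spirit of the Barabási–Albert model. This is a simplified sketch with illustrative parameters: sampled targets may repeat and are deduplicated, rather than being drawn as distinct nodes.

```python
# Sketch: growth + preferential attachment (Barabási–Albert style).
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph to n nodes; each new node attaches up to m edges,
    choosing targets with probability proportional to degree."""
    rng = random.Random(seed)
    edges = []
    targets = list(range(m))  # start from m seed nodes
    repeated = []             # each node appears once per unit of degree
    for new in range(m, n):
        for t in set(targets):          # deduplicate repeated picks
            edges.append((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        # degree-proportional sampling: picking uniformly from `repeated`
        # favours nodes that already have many edges
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = preferential_attachment(200, 2)
```

Running this repeatedly and tabulating degrees shows a few early, well-connected hubs accumulating far more links than the typical node, the qualitative signature of the scale-free distribution described above.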

This study considers a class of critical node detection problems that involves minimization of a distance-based connectivity measure of a given unweighted graph via the removal of a subset of nodes (referred to as critical nodes) subject to a budgetary constraint. The distance-based connectivity measure of a graph is assumed to be a function of the actual pairwise distances between nodes in the remaining graph (e.g., graph efficiency, Harary index, characteristic path length, residual closeness) rather than simply whether nodes are connected or not, a typical assumption in the literature. We derive linear integer programming (IP) formulations, along with additional enhancements, aimed at improving the performance of standard solvers. For handling larger instances, we develop an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem. The edge-weighted generalization is also considered, which results in some interesting implications for distance-based clique relaxations, namely, -clubs. Finally, we conduct extensive computational experiments with real-world and randomly generated network instances under various settings that reveal interesting insights and demonstrate the advantages and limitations of the proposed approach. In particular, one important conclusion of our work is that vulnerability of real-world networks to targeted attacks can be significantly more pronounced than what can be estimated by centrality-based heuristic methods commonly used in the literature. © 2015 Wiley Periodicals, Inc. NETWORKS, 2015

We briefly describe the toolkit used for studying complex systems: nonlinear dynamics, statistical physics, and network theory. We place particular emphasis on network theory--the topic of this special issue--and its importance in augmenting the framework for the quantitative study of complex systems. In order to illustrate the main issues, we briefly review several areas where network theory has led to significant developments in our understanding of complex systems. Specifically, we discuss changes, arising from network theory, in our understanding of (i) the Internet and other communication networks, (ii) the structure of natural ecosystems, (iii) the spread of diseases and information, (iv) the structure of cellular signalling networks, and (v) infrastructure robustness. Finally, we discuss how complexity requires both new tools and an augmentation of the conceptual framework--including an expanded definition of what is meant by a “quantitative prediction.”

In this work we propose a model for the diffusion of information in a complex network. The main assumption of the model is that the information is initially located at certain nodes and then is disseminated, with occasional losses when traversing the edges, to the rest of the network. We present two efficient algorithms, which we call max-path and sum-path, to compute, respectively, lower and upper bounds for the amount of information received at each node. Finally we provide an application of these algorithms to intentional risk analysis.
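
A hedged sketch of the idea behind such bounds, not the authors' algorithms: if each edge carries a transmission factor in (0, 1], the best single path delivers an amount that is guaranteed to arrive (a lower bound), while summing the contributions of all simple paths overcounts overlapping routes (an upper bound). The graph and factors are illustrative, and this enumeration of simple paths is exponential in general, unlike the efficient max-path and sum-path algorithms.

```python
# Sketch: lower/upper bounds on information received, assuming one unit
# starts at the source and each edge keeps a fraction of what traverses it.
def simple_paths(graph, s, t, seen=None):
    """Enumerate all simple (cycle-free) paths from s to t."""
    seen = (seen or set()) | {s}
    if s == t:
        yield [s]
        return
    for v in graph[s]:
        if v not in seen:
            for rest in simple_paths(graph, v, t, seen):
                yield [s] + rest

def path_gain(graph, path):
    gain = 1.0
    for u, v in zip(path, path[1:]):
        gain *= graph[u][v]
    return gain

def bounds(graph, s, t):
    gains = [path_gain(graph, p) for p in simple_paths(graph, s, t)]
    # best single path is achievable (lower bound); summing all paths
    # overcounts shared losses (upper bound), capped at the initial unit
    return max(gains), min(1.0, sum(gains))

graph = {
    "s": {"a": 0.6, "b": 0.4},
    "a": {"t": 0.5},
    "b": {"t": 0.5},
    "t": {},
}
lo, hi = bounds(graph, "s", "t")
```

Here the path s→a→t delivers 0.3 and s→b→t delivers 0.2, so the amount reaching t is bracketed between 0.3 and 0.5.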

CD-ROM contains: Searchable copy of textbook and all solutions -- Additional references -- Thumbnail sketches and photographs of mathematicians -- History of linear algebra and computing.

The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions come from the relationships between the irreducibility of some nonnegative block matrix associated to a multiplex network and the irreducibility of the corresponding matrices of each layer, as well as the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally we present the precise relations that allow us to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.

In the past years, network theory has successfully characterized the interaction among the constituents of a variety of complex systems, ranging from biological to technological and social systems. However, until recently, attention was almost exclusively given to networks in which all components were treated on an equivalent footing, while neglecting all the extra information about the temporal- or context-related properties of the interactions under study. Only in the last few years, taking advantage of the enhanced resolution in real data sets, have network scientists directed their interest to the multiplex character of real-world systems, and explicitly considered the time-varying and multilayer nature of networks. We offer here a comprehensive review of both the structural and dynamical organization of graphs made of diverse relationships (layers) between their constituents, and cover several relevant issues, from a full redefinition of the basic structural measures to understanding how the multilayer nature of the network affects processes and dynamics.

The centrality and efficiency measures of an undirected network G were shown by the authors to be strongly related to the respective measures on the associated line graph L(G). In this note we extend this study to a directed network G→ and its associated directed line graph L→(G→). The Bonacich centralities of these two networks are shown to be related in a surprisingly simpler manner than in the non-directed case. Efficiency is also considered and the corresponding relations established. In addition, an estimation of the clustering coefficient of L→(G→) is given in terms of the clustering coefficient of G→, and by means of an example we show that a reverse estimation cannot be expected. Given a non-directed graph G, there is a natural way to obtain from it a directed line graph, namely L→(D(G)), where the directed graph D(G) is obtained from G in the usual way. With this approach the authors estimate some parameters of L→(D(G)) in terms of the corresponding ones in L(G). In particular, we give an estimation of the norm difference between the centrality vectors of L→(D(G)) and L(G) in terms of the Collatz-Sinogowitz index (a measure of the irregularity of G). Analogous estimations are given for the efficiency measures. The results obtained strongly suggest that for a given non-directed network G, the directed line graph L→(D(G)) captures the properties of G more adequately than the non-directed line graph L(G).

Preface 1. Introduction to techniques 2. Generating functions I 3. Generating functions II: recurrence, sites visited, and the role of dimensionality 4. Boundary conditions, steady state, and the electrostatic analogy 5. Variations on the random walk 6. The shape of a random walk 7. Path integrals and self-avoidance 8. Properties of the random walk: introduction to scaling 9. Scaling of walks and critical phenomena 10. Walks and the O(n) model: mean field theory and spin waves 11. Scaling, fractals, and renormalization 12. More on the renormalization group References Index.

We consider the problem of finding the most and the least “influential” or “influenceable” cliques in graphs based on three classical centrality measures: degree, closeness and betweenness. In addition to standard clique betweenness, which is defined as the proportion of shortest paths between any two graph nodes that pass through the clique, we also consider its optimistic and pessimistic versions, where outside nodes may favor or try to avoid shortest paths passing through the clique, respectively. We discuss the computational complexity issues for these problems and develop linear 0-1 programming formulations for each centrality metric. Finally, we demonstrate the performance of the developed formulations on real-life and synthetic networks, and provide some interesting insights based on the obtained results. In particular, our findings indicate that there are considerable variations between the centrality values of large cliques within the same networks. Moreover, the most central cliques in graphs are not necessarily the largest ones.

From the Internet to networks of friendship, disease transmission, and even terrorism, the concept--and the reality--of networks has come to pervade modern society. But what exactly is a network? What different types of networks are there? Why are they interesting, and what can they tell us? In recent years, scientists from a range of fields--including mathematics, physics, computer science, sociology, and biology--have been pursuing these questions and building a new "science of networks." This book brings together for the first time a set of seminal articles representing research from across these disciplines. It is an ideal sourcebook for the key research in this fast-growing field. The book is organized into four sections, each preceded by an editors' introduction summarizing its contents and general theme. The first section sets the stage by discussing some of the historical antecedents of contemporary research in the area. From there the book moves to the empirical side of the science of networks before turning to the foundational modeling ideas that have been the focus of much subsequent activity. The book closes by taking the reader to the cutting edge of network science--the relationship between network structure and system dynamics. From network robustness to the spread of disease, this section offers a potpourri of topics on this rapidly expanding frontier of the new science.

An efficient and computationally advantageous definition of vulnerability of a complex network is introduced, through which one is able to overcome a series of practical difficulties encountered by the measurements used so far to quantify a network's security and stability under the effects of failures, attacks or dysfunctions. By means of this approach, we prove a series of theorems that allow us to gather information on the ranking of the nodes of a network with respect to their strategic importance in order to preserve the functioning and performance of the network as a whole.
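
One common way to make such a node ranking concrete (not necessarily the paper's exact definition) is to score each node by the drop in global efficiency E(G), the mean of 1/d(i, j) over ordered node pairs, when that node is removed.

```python
# Sketch: rank nodes by the drop in global efficiency caused by removing
# each one (an efficiency-based vulnerability score; definition assumed).
from collections import deque

def efficiency(adj):
    """Global efficiency: mean of 1/d(i, j) over ordered pairs;
    unreachable pairs contribute 0."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

def remove(adj, x):
    return {u: [v for v in nbrs if v != x] for u, nbrs in adj.items() if u != x}

def vulnerability_ranking(adj):
    base = efficiency(adj)
    drops = {x: base - efficiency(remove(adj, x)) for x in adj}
    return sorted(drops, key=drops.get, reverse=True)

# Star graph: removing the hub disconnects everything, so it ranks first.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
ranking = vulnerability_ranking(star)
```

This brute-force recomputation is O(n) efficiency evaluations; the point of the paper's definition is precisely to avoid that cost while preserving the ranking information.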

A novel node-based approach to quantify the vulnerability of a complex network is presented. The proposed measure represents a multiscale evaluation of vulnerability closely related to the edge multiscale vulnerability. In fact, a linear relationship between the edge multiscale vulnerability and the node multiscale vulnerability is stated for p = 1. Upper and lower bounds are established for other values of p. This mathematical framework is subsequently used to obtain some interesting results about the Madrid underground network.

This study presents an integer programming framework for minimizing the connectivity and cohesiveness properties of a given graph by removing nodes and edges subject to a joint budgetary constraint. The connectivity and cohesiveness metrics are assumed to be general functions of sizes of the remaining connected components and node degrees, respectively. We demonstrate that our approach encompasses, as special cases (possibly, under some mild conditions), several other models existing in the literature, including minimization of the total number of connected node pairs, minimization of the largest connected component size, and maximization of the number of connected components. We discuss computational complexity issues, derive linear mixed integer programming (MIP) formulations, and describe additional modeling enhancements aimed at improving the performance of MIP solvers. We also conduct extensive computational experiments with real-life and randomly generated network instances under various settings that reveal interesting insights and demonstrate advantages and limitations of the proposed framework.

Complex networks model real networks found in a wide range of domains, from biological and social to technological environments. Despite this range of applications and contexts in which complex networks are used as models, studies suggest that many real networks are governed by a similar dynamic. An important characteristic is that, in general, such networks are robust against failures but vulnerable to targeted attacks. On the other hand, ecological and biological systems show a peculiar characteristic: resilience. This means that they have a high capacity to absorb changes without dramatic modifications, probably as a result of learning and adaptation processes. However, resilience mechanisms are not present per se in technological networks. This work therefore presents a framework for vulnerability assessment and management in technological networks, drawing on resilience in ecological and biological systems as insight.

In this paper we consider the problem of detecting critical elements in networks. The objective of these problems is to identify a subset of elements (i.e., nodes, arcs, paths, cliques, etc.) whose deletion minimizes a given connectivity measure over the resulting network. This paper surveys some of the recent advances in solving these kinds of problems, including heuristic, mathematical programming, approximation, and dynamic programming approaches.

The concept of vulnerability in the context of complex networks quantifies the capacity of a network to maintain its functional performance under random damage, malicious attacks, or malfunctions of any kind. Different types of networks and different applications suggest different approaches to the concept of a network's structural vulnerability, depending on the aspect we focus upon. In this introductory chapter, we discuss several of these approaches and the relationships amongst them.

We investigate the relationship between the structure of a graph and its eigenspaces. The angles between the eigenspaces and the vectors of the standard basis of ℝⁿ play an important role. The key notion is that of a special basis for an eigenspace called a star basis. Star bases enable us to define a canonical basis of ℝⁿ associated with a graph, and to formulate an algorithm for graph isomorphism.

Network analysis begins with data that describes the set of relationships among the members of a system. The goal of analysis is to obtain from the low-level relational data a higher-level description of the structure of the system which identifies various kinds of patterns in the set of relationships. These patterns will be based on the way individuals are related to other individuals in the network. Some approaches to network analysis look for clusters of individuals who are tightly connected to one another; some look for sets of individuals who have similar patterns of relations to the rest of the network. Other methods don't "look for" anything in particular, instead, they construct a continuous multidimensional representation of the network in which the coordinates of the individuals can be further analyzed to obtain a variety of kinds of information about them and their relation to the rest of the network. One approach to this is to choose a set of axes in the multidimensional space occupied by the network and rotate them so that the first axis points in the direction of the greatest variability in the data; the second axis, orthogonal to the first, points in the direction of greatest remaining variability, and so on. This set of axes is a coordinate system that can be used to describe the relative positions of the set of points in the data. Most of the variability in the locations of points will be accounted for by the first few dimensions of this coordinate system. The coordinates of the points along each axis will be an eigenvector, and the length of the projection will be an eigenvalue. The set of all eigenvalues is the spectrum of the network. Spectral methods (eigendecomposition) have been a part of graph theory for over a century. Network researchers have used spectral methods either implicitly or explicitly since the late 1960's, when computers became generally accessible in most universities. 
The eigenvalues of a network are intimately connected to important topological features such as maximum distance across the network (diameter), presence of cohesive clusters, long paths and bottlenecks, and how random the network is. The associated eigenvectors can be used as a natural coordinate system for graph visualization; they also provide methods for discovering clusters and other local features. When combined with other, easily obtained network statistics (e.g., node degree), they can be used to describe a variety of network properties, such as degree of robustness (i.e., tolerance to removal of selected nodes or links), and other structural properties, and the relationship of these properties to node or link attributes in large, complex, multivariate networks. We introduce three types of spectral analysis for graphs and describe some of their mathematical properties. We discuss the strengths and weaknesses of each type and show how they can be used to understand network structure. These discussions are accompanied by interactive graphical displays of small (n=50) and moderately large (n=5000) networks. Throughout, we give special attention to sparse matrix methods which allow rapid, efficient storage and analysis of large networks. We briefly describe algorithms and analytic strategies that allow spectral analysis and identification of clusters in very large networks (n>1,000,000).

We give here conditions that two graphs be congruent and some theorems on the connectivity of graphs, and we conclude with some applications to dual graphs. These last theorems might also be proved by topological methods. The definitions and results of a paper by the author on “Non-separable and planar graphs,” † will be made use of constantly. We shall refer to this paper as N. For convenience, we shall say two arcs touch if they have a common vertex.

A seminal paper on the algebraic connectivity of a network.