
IFAC PapersOnLine 55-2 (2022) 408–413 (ISSN 2405-8963)
Available online at www.sciencedirect.com
DOI: 10.1016/j.ifacol.2022.04.228

Copyright © 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer review under responsibility of International Federation of Automatic Control.

The value of information for dynamic decentralised criticality computation

Yaniv Proselkov∗, Manuel Herrera∗∗, Marco Perez Hernandez∗∗∗, Ajith Kumar Parlikad∗∗∗∗, Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of Cambridge
∗ yp289@cam.ac.uk; ∗∗ amh226@cam.ac.uk; ∗∗∗ mep53@cam.ac.uk; ∗∗∗∗ aknp2@cam.ac.uk; † ab702@cam.ac.uk

This research was supported by the EPSRC and BT Prosperity Partnership project Next Generation Converged Digital Infrastructure (grant EP/R004935/1), and by the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership Award for the University of Cambridge (grant EP/R513180/1).

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance and operational resilience. These solutions require large amounts of data to be delivered quickly, enabled by telecom networks and network elements such as routers or switches. Disruptions can render a network inoperable; avoiding them requires rapid responsiveness to network usage, achievable by embedding autonomy into the network through fast, scalable algorithms that use key metrics, such as the impact of a network element's failure on system functions, to manage disruptions. Centralised approaches are insufficient for this because they need time to transmit data to the controller, by which time the data may have become irrelevant. Decentralised, information-bounded measures solve this by placing computational agents near the data source. We propose an agent-based model to assess the value of the information used in calculating decentralised criticality metrics, assigning a data-collection agent to each network element to compute relevant indicators of failure impact in a decentralised way. The model is evaluated by simulating discrete information exchange with concurrent data analysis, comparing measure accuracy against a benchmark and using measure computation time as a proxy for computational complexity. Results show that losses in accuracy are offset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation; Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven and dependent on the interconnection of multiple facilities for efficient decision-making; telecom infrastructure is therefore as pervasive a component of manufacturing industries as the power grid and other critical infrastructures. Telecom infrastructures are physical networks that support internet, telephony, and other digital services by facilitating data transfers between users. These infrastructures are represented by graphs, with network elements, such as routers or switches, as nodes and connections as edges. In such flow networks, data packet congestion at a node may cause node failure. Monitoring the impact of disruptions such as congestion on the network, known as criticality, is important for network behaviour control, which must be accurate and fast in networks functioning near capacity, as expected for future backbone networks (Moura and Hutchison, 2019).
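
As a minimal illustration of this graph representation (a sketch assuming Python with the networkx library; node names and attribute keys are hypothetical), network elements become attributed nodes whose load can be checked for congestion risk:

    import networkx as nx

    # Network elements (routers/switches) as nodes, connections as edges;
    # capacity and load attributes support congestion monitoring.
    G = nx.Graph()
    G.add_nodes_from([
        ("router_a", {"capacity": 100, "load": 62}),
        ("switch_b", {"capacity": 80, "load": 79}),
        ("router_c", {"capacity": 100, "load": 15}),
    ])
    G.add_edges_from([("router_a", "switch_b"), ("switch_b", "router_c")])

    # A node operating near capacity is a congestion, and hence failure, risk.
    at_risk = [n for n, d in G.nodes(data=True) if d["load"] / d["capacity"] > 0.9]
    print(at_risk)  # ['switch_b']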

In centralised network monitoring and criticality computation, the central computational resources need information on the whole network to create a criticality measure (CM) (Salazar et al., 2016; Fang and Zio, 2013), requiring live, dynamic node topology and attribute data to respond to behavioural shifts. The increased data volume causes longer computation times, so conclusions reached at a given point in time lose relevance; transferring too much data can itself promote critical events. The amount of data used for a CM should thus be minimised while preserving meaning, achievable by imposing a limited bound around a given node and computing criticality in a decentralised manner. This shortens data paths and reduces complexity by taking information only from a small region around a given node. We call these information-bounded CMs (IB-CMs). An IB-CM can be approximated with a centrality measure used as a criticality estimate (CE), as both define importance within a multicomponent system (Birnbaum, 1968). The more accurately and efficiently an IB-CM is computed, the more valuable the information used in its calculation; the value of information is thus a multivariate measure composed of the accuracy and efficiency of a given IB-CM under some information provision.
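
The idea of an information-bounded estimate can be sketched as follows (assuming networkx; betweenness restricted to a k-hop ego subgraph is an illustrative stand-in, not the exact IB-CM developed in this paper): each agent computes a local centrality from its bounded region, which can then be compared against a centralised benchmark.

    import networkx as nx

    def ib_criticality(G, node, k=2):
        # Information bound: only the k-hop region around `node` is visible,
        # so the agent never needs whole-network topology or attribute data.
        ego = nx.ego_graph(G, node, radius=k)
        return nx.betweenness_centrality(ego)[node]

    # Centralised benchmark CM for comparison: requires the full topology.
    G = nx.random_internet_as_graph(200, seed=1)  # stand-in telecom topology
    benchmark = nx.betweenness_centrality(G)
    for n in list(G)[:5]:
        print(n, round(ib_criticality(G, n), 3), round(benchmark[n], 3))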

This paper builds on Proselkov et al. (2020) to outline a method for assessing the accuracy and computational efficiency of IB-CMs that change with time, with respect to a novel benchmark estimate of dynamic criticality under different communication paradigms (CPs). These IB-CMs are designed for homogeneous flow networks. A prototype is presented that uses classic centrality measures as stand-ins for CMs and IB-CMs, with real network topology, on the use case of a telecom simulation model.

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗

aknp2@cam.ac.uk;

†

ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov

∗

Manuel Herrera

∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup

†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

Yaniv Proselkov et al. / IFAC PapersOnLine 55-2 (2022) 408–413 409

Copyright ©

2022 The Authors. This is an open access article under the CC BY-NC-ND license

(

https://creativecommons.org/licenses/by-nc-nd/4.0/

)

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computa-

tion, the central computational resources need information

This research was supported by the EPSRC and BT Prosperity

Partnership project: Next Generation Converged Digital Infrastruc-

ture, grant number EP/R004935/1, and the UK Engineering and

Physical Sciences Research Council (EPSRC) Doctoral Training

Partnership Award for the University of Cambridge, grant number

EP/R513180/1.

on the whole network, creating a criticality measure (CM)

(Salazar et al. (2016); Fang and Zio (2013)), requiring live

and dynamic node topology and attribute data to respond

to behavioural shifts. The increased data cause longer

computational times, so conclusions arrived at a given

point in time lose relevance, and promote critical events if

too much data is transferred. The amount of data used for

a CM should thus be minimised while preserving meaning,

achievable by imposing a limited bound around a given

node, and computing criticality in a decentralised manner.

This shortens data paths and reduces complexity by taking

information from a small region around a given node. We

call these information bounded CMs (IB-CMs). This IB-

CM can be approximated with a centrality measure used

as a criticality estimate (CE), as both deﬁne importance

within a multicomponent system (Birnbaum (1968)). The

more accurate and more eﬃciently computed an IB-CM is,

the more valuable the information used in its calculation

is, with the value of information is a multivariate measure

composed of the accuracy and eﬃciency of a given IB-CM

under some information provision.

This paper builds on Proselkov et al. (2020) to outline

a method to assess the accuracy and computational eﬃ-

ciency of IB-CMs that change with time, with respect to

a novel benchmark estimate of dynamic criticality under

The value of information for dynamic

decentralised criticality computation

Yaniv Proselkov∗Manuel Herrera∗∗

Marco Perez Hernandez∗∗∗ Ajith Kumar Parlikad∗∗∗∗

Alexandra Brintrup†

Institute for Manufacturing, Dept. of Engineering, University of

Cambridge

∗yp289@cam.ac.uk; ∗∗amh226@cam.ac.uk; ∗∗∗mep53@cam.ac.uk;

∗∗∗∗aknp2@cam.ac.uk; †ab702@cam.ac.uk

Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance

and operations resilience requiring large amounts of data delivered quickly, enabled by telecom

networks and network elements such as routers or switches. Disruptions can render a network

inoperable; avoiding them requires advanced responsiveness to network usage, achievable by

embedding autonomy into the network, providing fast and scalable algorithms that use key

metrics to manage disruptions, such as impact of failure in a network element on system

functions. Centralised approaches are insuﬃcient for this as they need time to transmit data

to the controller, by which time it may have become irrelevant. Decentralised and information

bounded measures solve this by placing computational agents near the data source. We propose

an agent-based model to assess the value of the information for calculating decentralised

criticality metrics, assigning a data collection agent to each network element, computing relevant

indicators of the impact of failure in a decentralised way. This is evaluated by simulating

discrete information exchange with concurrent data analysis, comparing measure accuracy to a

benchmark, and with measure computation time as a proxy for computation complexity. Results

show losses in accuracy are oﬀset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent

Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation;

Visibility; Criticality

1. INTRODUCTION

Manufacturing processes have become more data-driven

and dependent on interconnection of multiple facilities for

eﬃcient decision-making, thus telecom infrastructure is as

pervasive a component of manufacturing industries as the

powergrid and other critical infrastructures. Telecom in-

frastructures are physical networks that support internet,

telephony, and other digital services by facilitating data

transfers between users. Infrastructures are represented by

graphs, with network elements, such as routers or switches,

as nodes and connections as edges. In such ﬂow networks,

data packet congestion at a node may cause node failure.

How to monitor the impact of disruptions such as conges-

tion on the network - criticality - is important for network

behaviour control, which must be accurate and fast in net-

works that are functioning near capacity, as expected for

future backbone networks (Moura and Hutchison (2019)).

In centralised network monitoring and criticality computation, the central computational resources need information on the whole network, creating a criticality measure (CM) (Salazar et al. (2016); Fang and Zio (2013)), and requiring live, dynamic node topology and attribute data to respond to behavioural shifts. The increased data volume causes longer computation times, so conclusions reached at a given point in time lose relevance, and transferring too much data can itself promote critical events. The amount of data used for a CM should thus be minimised while preserving meaning, achievable by imposing a limited bound around a given node and computing criticality in a decentralised manner. This shortens data paths and reduces complexity by taking information from a small region around a given node. We call these information bounded CMs (IB-CMs). An IB-CM can be approximated with a centrality measure used as a criticality estimate (CE), as both define importance within a multicomponent system (Birnbaum (1968)). The more accurate and more efficiently computed an IB-CM is, the more valuable the information used in its calculation, where the value of information is a multivariate measure composed of the accuracy and efficiency of a given IB-CM under some information provision.

(This research was supported by the EPSRC and BT Prosperity Partnership project: Next Generation Converged Digital Infrastructure, grant number EP/R004935/1, and the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership Award for the University of Cambridge, grant number EP/R513180/1.)

This paper builds on Proselkov et al. (2020) to outline a method to assess the accuracy and computational efficiency of IB-CMs that change with time, with respect to a novel benchmark estimate of dynamic criticality under different communication paradigms (CPs). These IB-CMs are designed for homogeneous flow networks. A prototype is presented that uses classic centrality measures as stand-ins for CMs and IB-CMs, with a real network topology in the use case of a telecom simulation model.

2. LITERATURE REVIEW

Network topology affects routing and resilience to disruption, since shorter distances give quicker transfers. Criticality, defined as the impact of a node's inactivity on the operation of a network and evaluated by network connectivity in telecoms, is a key factor in understanding network resilience (Lü et al. (2016); Herrera et al. (2020)), and can be estimated using the current network state (Proselkov et al. (2020)). Criticality can inform prioritisation in network prognostics for proactive maintenance. Many criticality measures are based on centrality measures, including betweenness (Freeman (1977)), eigencentrality, and degree centrality. The first two are centralised, needing each node to take information from all nodes; degree centrality only needs each node to know the number of its neighbours.

Efficient decentralised computation approaches for understanding network criticality are important for networks operating under stress (Cetinkaya and Sterbenz (2013)). Cascade failure may also occur within regularly functioning systems due to random errors, as in January 1990, when 114 switching nodes of the AT&T network successively went down due to a faulty reset signal (Neumann (1995)).

Nodes within telecom networks provide information about their state either by transmitting to a supervisory node, which facilitates centralised centrality calculation, or to each other, which facilitates distributed centrality calculation. They can achieve decentralised communication by broadcasting to all neighbours their node ID, the value, and topological information, including the travel history of the data packet and previously broadcast packets known to remain in motion (Lehmann and Kaufmann (2003)). In practice, this takes at least as many timesteps as the minimum distance between the two nodes to complete.

Experimental evidence suggests increased computational efficiency and satisfactory performance of information bounded network measures, as in Ercsey-Ravasz and Toroczkai (2010), which details the relationship between the depth of the information bound and the size of the value distribution of the associated bounded betweenness measures. The value distribution increases exponentially with the depth of the bound up to the mean geodesic length before decreasing, suggesting meaningful sensitivity at the mean geodesic length. Tests were conducted on scale-free and random graphs. These only have one cluster, so it is expected that the ideal depth may be the mean cluster geodesic length.

Other papers give examples of limited range criticality and centrality for static measures (Wehmuth and Ziviani (2011); Chen et al. (2012); Nanda and Kotz (2008); Kermarrec et al. (2011); Dinh et al. (2010); Proselkov et al. (2020)) and for dynamic distributed criticality measures. All show accuracy despite limited boundaries. However, no large scale analysis of relative efficiency (via computation time) and accuracy has yet been conducted for dynamic criticality measures.

3. METHODS

3.1 Telecom simulation model

The network topology is generated, creating the graph $G = (V, E)$, where $V$ is the set of nodes and $E$ the set of edges. The discrete information packet exchange simulator (implemented with the Python package "Anx"; Likic and Shafi (2018)) is then run with short range dependence, meaning random nodes generate data packets independently according to a Poisson distribution with random destinations (Veres and Boda (2005)). As this is an information flow network, a timestep is how long it takes for information to traverse one edge. Packets traverse the network, stored and routed along nodes on the way to their destination, where they are removed from the system. Nodes and edges each have a fixed capacity which fills up over time, since it takes time to process packets at nodes and transmit them between nodes, with processing and transmission times as fixed model input parameters. The process terminates after either a fixed number of timesteps or once the network is too congested to function. Each node has a backlog capacity of $\varphi$. The simulation produces a time series over $T$ of each node's queued data packet backlog, where the size of the queue held by node $u \in V$ at time $t \in T$ is $\varphi^t_u$.
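As an illustration of these dynamics, the following is a minimal Python sketch of one simulator timestep. It assumes a connected topology; all names (`step`, `queues`) are ours, not the Anx API that the study actually uses, and per-packet processing and transmission delays are omitted for brevity.

```python
import numpy as np
import networkx as nx

def step(G, queues, capacity, rate, rng):
    """Advance the packet-exchange simulation by one timestep.

    G        -- connected networkx.Graph of the topology
    queues   -- dict mapping each node to a list of packet destinations (its backlog)
    capacity -- backlog capacity phi per node
    rate     -- Poisson mean of packets generated per timestep
    rng      -- numpy.random.Generator
    """
    nodes = list(G.nodes)
    # 1. Poisson arrivals: random sources emit packets with random destinations.
    for _ in range(rng.poisson(rate)):
        src, dst = rng.choice(nodes, size=2, replace=False)
        if len(queues[src]) < capacity:          # a saturated node drops new packets
            queues[src].append(dst)
    # 2. Each node forwards its head-of-line packet one hop along a geodesic.
    for u in nodes:
        if not queues[u]:
            continue
        dst = queues[u][0]
        nxt = nx.shortest_path(G, u, dst)[1]     # next hop towards the destination
        if nxt == dst:
            queues[u].pop(0)                     # delivered: removed from the system
        elif len(queues[nxt]) < capacity:
            queues[nxt].append(queues[u].pop(0))

# Example: queues = {u: [] for u in G.nodes}; call step() once per timestep and
# record [len(queues[u]) for u in G.nodes] to obtain the phi^t_u time series.
```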

We then examine how nodes would behave if they were receiving and processing network data in real time in an agent-based simulation (implemented with the Python package "Mesa"; Kazil et al. (2020)), with an independent agent situated at each node. Depending on our monitoring data CP, which determines how up-to-date information is (currentness), different nodes get different information about others depending on their position in the network. We investigate three CPs, named instant, constant, and periodic.

For a pair of nodes $u, v \in V$, there is a path $p_{uv} \subseteq G$ from $u$ to $v$ if, for some $n \in \mathbb{Z}^+$, there exists an ordered sequence of nodes $(u, (u_i)_{i=0}^{n}, v) \subseteq V$ with edges $\{uu_0, (u_i u_{i+1})_{i=0}^{n-1}, u_n v\} \subseteq E$, or if $uv \in E$. If there is a path from $u$ to $v$, then $u$ gets information about $\varphi^t_v$. The queue at time $t$ that the agent at $u$ believes $v$ has is the perceived queue, $q^t_{uv} \in Q$; if $u = v$ this is $q^t_u$. Centrality calculation takes time, so the time from transmission to output is always greater than that from transmission to receipt between nodes; thus nodes must compute centralities at a lower frequency than the CP dictates to avoid losing currentness. This period between calculations is $\mu$, the monitor interval.

Instant communication is a simplification, assuming monitoring data is transferred instantly, so that for all $u, v \in V$, $q^t_{uv} = q^t_v$. This is a base case, only achievable if monitoring data transfer became so fast as to be insignificant.

Constant communication has nodes declare their queues every timestep. This data traverses the network normally, because the declaration is a low bandwidth operation. For a shortest path (geodesic) of length $n$ between $u$ and $v$, information from $v$ takes $n$ timesteps to reach $u$, so $u$ perceives $v$ $n$ timesteps late, i.e. $q^t_{uv} = q^{t-n}_v$; distant nodes therefore give less accurate, valuable, or relevant information.

Periodic communication has nodes declare their queues and perceived queues at the same frequency as they calculate their centrality. This corresponds to some aggregation of the functions within higher order control functions that use centralities as inputs. For monitor interval $\mu$ and nodes $u$ and $v$ with a length-$n$ geodesic, $q^t_{uv} = q^{t - \mu n}_v$, as each queue pass takes $\mu$ timesteps.
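For illustration, the sketch below computes the perceived queue $q^t_{uv}$ under each of the three CPs from a stored history of true queue sizes; the names (`perceived_queue`, `history`) are hypothetical, and the full per-node queue history is assumed to be retained for lookup.

```python
import networkx as nx

def perceived_queue(G, history, u, v, t, cp, mu=250):
    """Return q^t_{uv}, the queue the agent at u believes v holds at time t.

    history -- dict mapping each node to its list of true queue sizes per timestep
    cp      -- 'instant', 'constant', or 'periodic'
    mu      -- monitor interval in timesteps
    """
    n = nx.shortest_path_length(G, u, v)       # geodesic length between u and v
    delay = {"instant": 0, "constant": n, "periodic": mu * n}[cp]
    s = t - delay
    return history[v][s] if s >= 0 else None   # no information has arrived yet
```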

With each CP, nodes receive perceived queues of others in the network. These values are used to inform dynamic, queue dependent CEs, which are calculated with the adjacency matrix, and so with respect to edge weight rather than node weight, assigned according to the following steps. First the graph is redefined as directed, such that for $(uv), (vu) \in E$, $(uv) \neq (vu)$. For a node $u$ and all $v \in \Gamma_1(u)$, the weight of edge $(uv)$ is

$$(uv)^t_q = q^t_u / |\Gamma_1(u)|, \qquad (1)$$

because larger neighbourhoods give nodes more chances to emit data packets and distribute load among them, and it accounts for the respective queues of node pairs, since for every $(uv)$ there exists a $(vu)$ produced under the same rules.
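A minimal sketch of this weighting step, assuming the (perceived) queues $q^t_u$ have already been gathered into a per-node dict (`q` below is hypothetical):

```python
import networkx as nx

def weighted_digraph(G, q):
    """Build the directed, queue-weighted graph of Eqn. (1).

    G -- undirected networkx.Graph topology
    q -- dict mapping each node u to its queue q^t_u at the current time
    """
    D = G.to_directed()                         # (uv) and (vu) become distinct edges
    for u, v in D.edges:
        D[u][v]["weight"] = q[u] / G.degree(u)  # q^t_u / |Gamma_1(u)|
    return D
```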

The following subsection describes the data analysis carried out with the data provided according to each CP.

3.2 Centrality Measure CEs

According to each of the above CPs, the data delivered to each agent situated at a node is used to compute centrality measures over time as proxies and estimates of criticality. In this initial study standard centrality measures are used, with weighted and bounded extensions. The unweighted measures only take topological data, whereas weighted measures adjust their outputs according to the perceived queues for each node. Unbounded, or sociocentric, measures take information from the whole network and stand in for CMs, while bounded, or egocentric, measures take information from a limited region around a given node and stand in for IB-CMs.

We define the information boundary around a node by geodesic distance. For a node $u \in V$, the set of nodes $i$ edges away is $\Gamma_i(u) \subset V$, where $\Gamma_1(u)$ is the neighbourhood of $u$. The set of nodes at most $i$ edges away from $u$ is $H_i(u) = \bigcup_{j=1}^{i} \Gamma_j(u)$. If $u$ has an information boundary at distance $i$, it takes information from $H_i(u)$.
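For reference, $H_i(u)$ and its induced subgraph can be obtained directly with NetworkX's `ego_graph`; a small sketch:

```python
import networkx as nx

def bounded_region(G, u, i):
    """Induced subgraph on H_i(u): all nodes at most i hops from u."""
    return nx.ego_graph(G, u, radius=i)

# Example: the 2-hop region around node 0 of a small random graph.
G = nx.erdos_renyi_graph(50, 0.08, seed=1)
print(sorted(bounded_region(G, 0, 2).nodes))
```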

Degree Centrality: Unweighted Degree Centrality for a node $u$ counts the number of neighbours. It is defined as $C^u_d(u)^t = C^u_d(u) = |\Gamma_1(u)|$.

Weighted Degree Centrality counts each node as many times as its perceived queue length. It is dynamic and defined as $C^w_d(u)^t = \sum_{v \in \Gamma_1(u)} q^t_{uv}$.

Betweenness Centrality: All distinct paths with the same length and the minimum number of elements are geodesics. The number of geodesics from $v$ to $w$ is $\rho_{v,w} : V \to \mathbb{Z}^+$, and the number of geodesics from $v$ to $w$ passing through $u$ is $\rho_{v,w|u} : V \to \mathbb{Z}^+$.

Unweighted Sociocentric Betweenness Centrality (Freeman (1977)) tracks pathway disruption potential. It is static, calculating the fraction of shortest paths between all node pairs passing through the subject node.

Unweighted Egocentric Betweenness Centrality measures the betweenness of a bounded region surrounding a node. It correlates strongly with sociocentric betweenness (Marsden (2002)), and is computable in a decentralised manner. For a node $u \in V$ it measures the betweenness of the induced subgraph of $H_i(u)$, such that

$$C^{ue}_b(u)^t = C^{ue}_b(u) = \sum_{v,w \in H_i(u)} \rho_{v,w|u} / \rho_{v,w}.$$
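A sketch of this unweighted egocentric variant, composing `ego_graph` with NetworkX's betweenness routine (the function name is ours; note that NetworkX's weighted betweenness treats weights as distances, so it is not a drop-in for the weighted $\omega$ formulation given next):

```python
import networkx as nx

def ego_betweenness(G, u, i):
    """Unweighted egocentric betweenness: betweenness of u computed only
    on the induced subgraph of H_i(u)."""
    H = nx.ego_graph(G, u, radius=i)
    return nx.betweenness_centrality(H, normalized=False)[u]
```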

Weighted Sociocentric Betweenness Centrality uses a weighted shortest path parameter. $P_{vw}$ is the set of shortest paths between nodes $v$ and $w$, and using Eqn. (1) the CE is defined via

$$\omega^t_{v,w} = \sum_{p_{vw} \in P_{vw}} \; \sum_{(st) \in p_{vw},\, (st) \in E} (st)^t_q; \qquad \omega^t_{v,w|u} = \sum_{u \in p_{vw} \in P_{vw}} \; \sum_{(st) \in p_{vw},\, (st) \in E} (st)^t_q, \qquad (2)$$

giving weighted sociocentric betweenness centrality as

$$C^{ws}_b(u)^t = \sum_{v,w \in V,\, u \neq v \neq w} \omega^t_{v,w|u} / \omega^t_{v,w}. \qquad (3)$$

Weighted Egocentric Betweenness Centrality takes Eqn. (3) but over $H_i(u)$.

Eigencentrality: Unweighted Sociocentric Eigencentrality captures the connectivity of the network, valuing nodes with more connections to well connected nodes. The number of edges between nodes $u_i$ and $u_j$ is $a_{i,j}$, displayable in a matrix $A_G \in M_n(\{0,1\})$, the adjacency matrix, where

$$A_G = (a_{i,j}) = \begin{cases} 1, & (i,j) \in E \\ 0, & \text{otherwise}, \end{cases}$$

for the graph $G$. The eigencentralities of the nodes in the network are found for the largest eigenvalue, $\lambda_G$, with

$$A_G x = \lambda_G x, \qquad (4)$$

and the CE is the solution to Eqn. (4), numerically solved via power iteration, or von Mises iteration (von Mises and Pollaczek-Geiringer (1929)).

Unweighted Egocentric Eigencentrality is the solution to Eqn. (4) over $H_i(u)$ rather than over $G$.

Weighted Sociocentric Eigencentrality uses the directed network with edge weights as defined by Eqn. (1). The adjacency matrix becomes dynamic and temporally dependent, such that for $A^t_G \in M_n(\mathbb{Z}^+)$,

$$A^t_G = (a_{i,j}) = \begin{cases} (u_i u_j)^t_q, & (i,j) \in E \\ 0, & \text{otherwise}. \end{cases} \qquad (5)$$

$A_G$ in Eqn. (4) is then replaced by $A^t_G$ from Eqn. (5).

Weighted Egocentric Eigencentrality uses $A^t_G$ from Eqn. (5). For node $u_j$ it is computed over $H_i(u_j)$, not $G$, creating

$$C^{we}_e(u_j)^t = (A^t_{H_i(u)} x)_j = (\lambda^t_{H_i(u)} x)_j.$$
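To illustrate Eqns. (4) and (5) in their bounded form, here is a minimal power (von Mises) iteration sketch over the queue-weighted region $H_i(u)$. The tolerance and iteration cap are our own choices, and `D` is assumed to be the weighted digraph from the Eqn. (1) sketch.

```python
import numpy as np
import networkx as nx

def ego_eigencentrality(D, u, i, tol=1e-8, max_iter=1000):
    """Eigencentrality of u over H_i(u), via power (von Mises) iteration.

    D -- queue-weighted networkx.DiGraph with edge weights per Eqn. (1)
    """
    H = nx.ego_graph(D, u, radius=i)
    nodes = list(H.nodes)
    A = nx.to_numpy_array(H, nodelist=nodes, weight="weight")  # A^t over H_i(u)
    x = np.ones(len(nodes))
    for _ in range(max_iter):
        y = A @ x
        norm = np.linalg.norm(y)
        if norm == 0:                  # degenerate region: all weights zero
            break
        y = y / norm
        if np.linalg.norm(y - x) < tol:
            x = y
            break
        x = y
    return x[nodes.index(u)]           # eigenvector entry at u
```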

These CEs will be used as proxies for criticality measures.

To compute the value of information as processed through

each measure, we now outline a validation method.

3.3 Validation Method

The measures above must be validated as correctly approximating dynamic criticality within the network. A validation function must determine, at any timestep, the similarity of our CE to a benchmark, together with its period of relevance. Criticality measures the impact of failure, so both that impact and the period for which the effects of some action can be attributed to a previous one must be defined. Analysis is conducted post hoc, using data that is neither limited by


the imperceptibility of the future nor by communication constraints. We take linear functions of the total queue sizes of the whole network, using $\Phi^t = \sum_{u \in V} \varphi^t_u$. We also find a time range for which we have sufficient confidence that all network states are sufficiently dependent on each other.

Ideal Time Horizon: This is a moving time window, bisected by the present timestep, where the window's start sufficiently influenced all timesteps up to the present, and the present will sufficiently influence all timesteps up to the window's end. With it, we can find how far a CE must look into the future to sufficiently capture both the current network state and its influence. We iterate over a fixed number, $h_{test}$, of time horizon windows $h_i$, each less than half the simulation length $t_{max}$, where $h_i = i\, t_{max} / (2 h_{test})$, and take moving averages over $\Phi^t$ for each width $h_i$, so

$$\mathrm{MA}^t_{\Phi;h_i} = \begin{cases} \sum_{\tau = t - h_i}^{t} \Phi^{\tau} / h_i, & t \geq h_i; \\ \emptyset, & t < h_i, \end{cases}$$

and $\mathrm{MA}_{\Phi;h_i}$ is the time series made up of the $\mathrm{MA}^t_{\Phi;h_i}$. Then, for all $t$ such that $\mathrm{MA}^t_{\Phi;h_i}$ exists, we take the absolute difference between $\mathrm{MA}^t_{\Phi;h_i}$ and $\Phi^t$, such that

$$\mathrm{MAD}^t_i = \begin{cases} |\mathrm{MA}^t_{\Phi;h_i} - \Phi^t|, & t \geq h_i; \\ \emptyset, & t < h_i, \end{cases}$$

and get the sum of absolute differences, $\mathrm{SAD}_i = \sum_t \mathrm{MAD}^t_i$. Normalised, this is $\mathrm{NSAD}_i = \mathrm{SAD}_i / \max_{i=1}^{h_{test}} \mathrm{SAD}_i$. Iterating through $\mathrm{NSAD}_i$ in ascending $i$, we obtain $g_i = h_{test}(\mathrm{NSAD}_{i+1} - \mathrm{NSAD}_i)$. The ideal time horizon is where the relative gain in error from a wider window is large enough to suggest that all smaller window sizes cover regions with significant influence over each other. Beyond that, since error gain slows down, one cannot confidently claim events are the direct consequence of the current time. This confidence, the validation threshold, is an independent parameter, $c$, with which we define the ideal time horizon $h$ for the first $i$ where one of the following conditions is fulfilled (if the last case is reached we must test more windows or increase the confidence threshold):

$$h = \begin{cases} h_i/2, & g_i \leq c; \\ h_{i-1}/2, & g_i < 0; \\ \emptyset, & i = h_{test}. \end{cases}$$
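A sketch of this window search, under the formulas above; the array handling is our own simplification, and we interpret the selected horizon as half the chosen window width (`Phi` is the series $\Phi^t$):

```python
import numpy as np

def ideal_time_horizon(Phi, h_test, c):
    """Search candidate window widths h_i for the ideal time horizon h.

    Phi    -- 1-D array of total network queue sizes over time
    h_test -- number of candidate window widths
    c      -- validation threshold on the error gain g_i
    """
    Phi = np.asarray(Phi, dtype=float)
    t_max = len(Phi)
    widths = [max(1, i * t_max // (2 * h_test)) for i in range(1, h_test + 1)]
    sad = []
    for w in widths:
        # moving average of width w, defined for t >= w
        ma = np.convolve(Phi, np.ones(w) / w, mode="valid")
        sad.append(np.abs(ma - Phi[w - 1:]).sum())   # SAD_i
    nsad = np.array(sad) / max(sad)                  # NSAD_i
    g = h_test * np.diff(nsad)                       # g_i
    for i, gi in enumerate(g, start=1):
        # cases are checked in the order the paper lists them
        if gi <= c:
            return widths[i - 1] // 2                # h = h_i / 2
        if gi < 0:
            return widths[i - 1] // 2 if i == 1 else widths[i - 2] // 2
    return None                                      # test more windows or raise c
```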

Comparison Accuracy Function: We compare CEs to a benchmark measure of criticality, defined as the change in network operation induced by any network state changes. Dependencies are sufficiently large for all timesteps at most $h$ timesteps from each other, so impacts occur over a meaningful timescale of $h$. Impact at time $t$ is the change over $h$ timesteps across $t$, scaled by the built-up queues at time $t$, since a heavily used system has more to lose than an underused one. We obtain a moving average with window width $h$, $\mathrm{MA}_{\Phi;h}$, and produce a time series of scaled differences across a time horizon,

$$\mathrm{THD}^t = \mathrm{MA}^t_{\Phi;h} \left( \mathrm{MA}^{t+h}_{\Phi;h} - \mathrm{MA}^{t-h+1}_{\Phi;h} \right).$$

This is normalised to $[0,1]$, creating $\mathrm{NTHD}^t = (\mathrm{THD}^t - \min_t \mathrm{THD}^t)/(\max_t \mathrm{THD}^t - \min_t \mathrm{THD}^t)$, the criticality benchmark. For a CE, $C(u)^t$, we calculate the network mean, $C^t = \sum_{u \in V} C(u)^t / |V|$, and normalise to get $\mathrm{NC}^t$. Let $\mathcal{T} = \{\tau \in T : \tau = k\mu,\, k \in \mathbb{Z}^+\}$. The error from the benchmark is $\mathrm{Err}_t = \mathrm{NC}^t - \mathrm{NTHD}^t$, and the root mean squared error is $\mathrm{RMSE} = \sqrt{\sum_{t \in \mathcal{T}} \mathrm{Err}^2_t / |\mathcal{T}|}$. The lowest RMSE gives the most accurate measure, since it most closely follows the benchmark criticality.

For any given CE, the value of information $V$ for a given dataset $D$ is the multivariate measure of the reciprocal of the RMSE and the reciprocal of the computation time, Comp, so that it grows with reduced error and increased time efficiency:

$$V(D; \mathrm{CE}) = \left( (\mathrm{RMSE}; D, \mathrm{CE})^{-1},\; (\mathrm{Comp}; D, \mathrm{CE})^{-1} \right).$$
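Putting the benchmark and the error together, here is a sketch of the value-of-information computation, assuming a CE time series `ce` aligned with `Phi` and a measured mean computation time; all names are ours:

```python
import numpy as np

def normalise(x):
    """Scale a series to [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def value_of_information(Phi, ce, h, mu, comp_time):
    """Return (1/RMSE, 1/Comp), the value-of-information pair for one CE.

    Phi       -- total queue size Phi^t per timestep
    ce        -- network-mean CE value per timestep, same length as Phi
    h         -- ideal time horizon
    mu        -- monitor interval (CE sampling period)
    comp_time -- mean computation time of the CE
    """
    Phi = np.asarray(Phi, dtype=float)
    ma = np.convolve(Phi, np.ones(h) / h, mode="same")  # MA_{Phi;h}
    t = np.arange(h, len(Phi) - h)                      # where THD^t is defined
    thd = ma[t] * (ma[t + h] - ma[t - h + 1])           # scaled differences THD^t
    nthd = normalise(thd)                               # benchmark NTHD^t
    nc = normalise(np.asarray(ce, dtype=float)[t])
    idx = np.arange(0, len(t), mu)                      # sample at multiples of mu
    err = nc[idx] - nthd[idx]                           # Err_t
    rmse = np.sqrt(np.mean(err ** 2))
    return 1.0 / rmse, 1.0 / comp_time
```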

4. RESULTS AND DISCUSSION

We compared simulation results of instant, constant, and periodic CPs for the accuracy of decentralised, dynamic, and information bounded centrality measures for estimating criticality. Three simulations were run, one for each CP, using the real topology of the UK outer backbone infrastructure network for a UK telecoms service provider (Fig. 1).

Fig. 1. Outer backbone UK infrastructure network for a large UK service provider.

Simulation inputs are in Table 1. We skip the first 5000 timesteps to avoid degeneracy, since we initialise on an empty network.

Table 1. Simulation Inputs

Runtime t_max (hundredths of a second)   100000 timesteps
Monitor interval µ                       250 timesteps
Packet generation rate                   19 packets/timestep
Processing delay                         13 timesteps
Queue check time                         1 timestep
Transmission time                        30 timesteps
Node capacity φ                          128 packets
Link capacity                            1024 packets
Information visibility boundary          2 hops
Validation threshold c                   0.1 relative difference
Window widths tested h_test              20 windows
Ideal time horizon h                     10000 timesteps

Fig. 2 shows plots of all centralities and the criticality benchmark, NTHD^t, for each CP, filtered using a first order Savitzky-Golay filter. These plots show a substantial difference between outputs for weighted and unweighted measures across all CPs, but further analysis will show similar accuracy. The absolute error from NTHD^t, |Err_t|, was calculated and the results plotted in Fig. 3. These plots cover only the weighted measures, since the error of static values is a trivial transformation of the criticality benchmark. We can see the relative accuracy of each curve, showing similar accuracy between bounded and unbounded measures. Periodic CP readouts are more closely clustered in terms of accuracy. The error plots are also filtered using a first order Savitzky-Golay filter.

Fig. 2. CEs and NTHD^t for simulations of each CP; panels (b) Instant, (c) Constant, (d) Periodic.

Fig. 3. Error for weighted CEs and NTHD^t for simulations of each CP; panels (b) Instant, (c) Constant, (d) Periodic.

Table 2 shows the RMSE from the criticality benchmark for each CE and each CP. Weighted, dynamic CEs largely performed much better than their static counterparts, and boundedness minimally impacted accuracy. Typically, CEs were most accurate for the constant CP, followed by periodic, then instant. Of the weighted bounded CEs, betweenness was best in the periodic CP, with large variation between CPs; eigenvector was best in the constant CP; and degree was best in the instant CP, much better than in periodic and constant, which differ widely. This suggests different CEs suit different CPs. No weighted bounded CE had an RMSE above 0.35: betweenness was worst, in the constant CP, with RMSE 0.339, and degree was best, in the instant CP, with RMSE 0.239. Eigencentrality and betweenness had similar consistency, with ranges of 0.086 and 0.085 respectively. All values are similar and low, suggesting that combining bounding and dynamicity creates accurate and scalable CEs.

Table 2. Root Mean Squared Error for each CP and CE. Blue is the least error, red is the most.

                           Instant   Constant   Periodic
Betweenness                0.536     0.412      0.482
Bnd'd. Betweenness         0.491     0.368      0.437
Wtd. Betweenness           0.263     0.27       0.28
Wtd. Bnd'd. Betweenness    0.301     0.339      0.254
Eigenvector                0.44      0.321      0.388
Bnd'd. Eigenvector         0.381     0.268      0.332
Wtd. Eigenvector           0.228     0.212      0.244
Wtd. Bnd'd. Eigenvector    0.327     0.241      0.259
Degree                     0.487     0.365      0.433
Wtd. Degree                0.239     0.344      0.273

Boundedness and the time horizon are spatial and temporal efforts to increase the relevance of a given calculation. A sufficiently small information boundary also reduces computational complexity, allowing calculations to take place within the relevant period. In application, the monitor interval should be bounded above by the relevant period, and is typically bounded below by the computation time. This motivates analysing the computation time of each measure under each CP. All analyses were completed on Google Colab Pro, a Jupyter notebook service that provides a Python 3 Google Compute Engine backend with adaptable memory of up to 32 GB RAM and 2 virtual CPUs (Intel(R) Xeon(R) @ 2.20 GHz).

Fig. 4 shows the computation time plots. Limiting information most affects weighted betweenness, which unbounded can take over 0.07 seconds but bounded may take less than 0.01, close to weighted bounded eigencentrality. Instant and constant computation times are similar for all measures, though the instant CP shows variability and intermittent spikes during network congestion, where queues grow because build-up exceeds processing speed in certain regions. The periodic CP was uniformly faster; since it places a lighter memory load through its lower frequency, this may be an artefact of computational stress on the computer during simulation. Dynamicity increases computation time for complex measures, but minimally impacts degree centrality, which is computed nearly instantly. Means for each measure and CP are shown in Table 3.

Fig. 4. Computation time for all CEs for simulations of each CP, measured in seconds; panels (b) Instant, (c) Constant, (d) Periodic.

Table 3. Mean computation time for all CEs for simulations of each CP, measured in seconds. Blue is the fastest, red is the slowest.

                           Instant    Constant   Periodic
Betweenness                0.03112    0.03022    0.0245
Bnd'd. Betweenness         0.00676    0.00669    0.00554
Wtd. Betweenness           0.06512    0.06231    0.05248
Wtd. Bnd'd. Betweenness    0.01188    0.01154    0.00951
Eigenvector                0.00738    0.00726    0.00612
Bnd'd. Eigenvector         0.00445    0.0043     0.00357
Wtd. Eigenvector           0.01902    0.01874    0.01523
Wtd. Bnd'd. Eigenvector    0.01084    0.01068    0.00869
Degree                     1.30E-05   1.09E-05   5.19E-06
Wtd. Degree                1.83E-05   1.38E-05   6.88E-06

5. CONCLUSION

In this paper, we reviewed network criticality, communication paradigms, and existing IB-CMs. We introduced a model of simulated decentralised online network measurement under three CPs. We then defined the IB-CMs used in our analysis (in practice, CEs). The validation method that determines the error, and from there the value of the information, was then outlined. Simulations for the various CEs


under different CPs and their results were then detailed, showing the viability of information bounded and dynamic criticality estimation. Together, this research provides a framework to develop more advanced CEs. Bounding information visibility is a viable method for scalable measures that preserves accuracy while speeding up calculation. Each CE was shown to produce output occupying the same approximate region across CPs, so behaving similarly. This holds true for the error curves too, as in Figs. 2 and 3. In fact, error proves to be more constant for the periodic CP, suggesting an advantage in terms of control for that CP.

Computation time was found to have a similar ordering among CEs across CPs, decreasing in variance and magnitude from instant to constant to periodic communication. This may be an artefact of simulating decentralised agents with a central computer. Alternatively, it may support the argument for decentralised computation, since a lower information frequency improves computation speed in individual agents. Adding interpolation or statistical data generation may then give finely detailed, accurate measures with low data packet load and high responsiveness.

This study simulated normally operating networks; further research will simulate critical node failure. We expect this will introduce variability in computation times, which may affect monitoring interval selection. Future work will also use the advanced CEs developed in Proselkov et al. (2020), as well as produce case specific, data derived CEs for maximum relevance. Using these for network control functions will then allow us to learn the value of information function by removing measure dependency, achieved through the validation method defined in this paper measured against a performance metric. We predict this will give useful findings in studies of network homophily, and tools for policy makers when constructing or designing networks with communication.

Beyond the telecom case, this analytic framework will be applicable to other systems with dynamic flow and independent cognitive agents, such as business networks, mail networks, river networks, and more, each a critical support network for any manufacturing system.

REFERENCES

Birnbaum, Z.W. (1968). On the Importance of Different Components in a Multicomponent System. Technical report, Washington University Seattle Lab of Statistical Research, Seattle.

Cetinkaya, E.K. and Sterbenz, J.P.G. (2013). A Taxonomy of Network Challenges. In Design of Reliable Communication Networks. IEEE.

Chen, D., Lü, L., Shang, M.S., Zhang, Y.C., and Zhou, T. (2012). Identifying influential nodes in complex networks. Physica A: Statistical Mechanics and its Applications, 391(4), 1777-1787.

Dinh, T.N., Xuan, Y., Thai, M.T., Park, E.K., and Znati, T. (2010). On Approximation of New Optimization Methods for Assessing Network Vulnerability. In IEEE INFOCOM 2010 - IEEE Conference on Computer Communications, 1-9. IEEE.

Ercsey-Ravasz, M. and Toroczkai, Z. (2010). Centrality scaling in large networks. Physical Review Letters, 105(3).

Fang, Y. and Zio, E. (2013). Hierarchical Modeling by Recursive Unsupervised Spectral Clustering and Network Extended Importance Measures to Analyze the Reliability Characteristics of Complex Network Systems. American Journal of Operations Research, 03(01), 101-112.

Freeman, L.C. (1977). A Set of Measures of Centrality Based on Betweenness. Sociometry, 40(1), 35.

Herrera, M., Perez-Hernandez, M., Kumar Jain, A., and Kumar Parlikad, A. (2020). Critical link analysis of a national Internet backbone via dynamic perturbation. In Advanced Maintenance Engineering, Services and Technologies.

Kazil, J., Masad, D., and Crooks, A. (2020). Utilizing Python for Agent-Based Modeling: The Mesa Framework, volume 12268 LNCS. Springer International Publishing.

Kermarrec, A.M., Le Merrer, E., Sericola, B., and Trédan, G. (2011). Second order centrality: Distributed assessment of nodes criticity in complex networks. Computer Communications, 34(5), 619-628.

Lehmann, K.A. and Kaufmann, M. (2003). Decentralized algorithms for evaluating centrality in complex networks. Networks, (January 2003), 1-9.

Likic, V. and Shafi, K. (2018). Battlespace Mobile/Ad Hoc Communication Networks: Performance, Vulnerability and Resilience. 303-314.

Lü, L., Chen, D., Ren, X.L., Zhang, Q.M., Zhang, Y.C., and Zhou, T. (2016). Vital nodes identification in complex networks. Physics Reports, 650, 1-63.

Marsden, P.V. (2002). Egocentric and sociocentric measures of network centrality. Social Networks, 24(4), 407-422.

Moura, J. and Hutchison, D. (2019). Cyber-Physical Systems Resilience: State of the Art, Research Issues and Future Trends. 42.

Nanda, S. and Kotz, D. (2008). Localized Bridging Centrality for Distributed Network Analysis. In 2008 Proceedings of 17th International Conference on Computer Communications and Networks, 1-6. IEEE.

Neumann, P. (1995). Fatal Defect: Chasing Killer Computer Bugs, volume 20. Times Books, first edition.

Proselkov, Y., Herrera, M., Parlikad, A.K., and Brintrup, A. (2020). Distributed Dynamic Measures of Criticality for Telecommunication Networks. In Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future, 1-12. Springer.

Salazar, J.C., Nejjari, F., Sarrate, R., Weber, P., and Theilliol, D. (2016). Reliability Importance Measures for Availability Enhancement in Drinking Water Networks. Technical report.

Veres, A. and Boda, M. (2005). Complex Dynamics in Communication Networks. Springer: Complexity.

von Mises, R. and Pollaczek-Geiringer, H. (1929). Praktische Verfahren der Gleichungsauflösung. ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 9, 152-164.

Wehmuth, K. and Ziviani, A. (2011). Distributed location of the critical nodes to network robustness based on spectral analysis. In LANOMS 2011, 1-8. IEEE.