IFAC PapersOnLine 55-2 (2022) 408–413
Available online at www.sciencedirect.com (ScienceDirect)
doi: 10.1016/j.ifacol.2022.04.228 | ISSN 2405-8963
Copyright © 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Peer review under responsibility of International Federation of Automatic Control.
The value of information for dynamic decentralised criticality computation

Yaniv Proselkov∗, Manuel Herrera∗∗, Marco Perez Hernandez∗∗∗, Ajith Kumar Parlikad∗∗∗∗, Alexandra Brintrup

Institute for Manufacturing, Dept. of Engineering, University of Cambridge
∗ yp289@cam.ac.uk; ∗∗ amh226@cam.ac.uk; ∗∗∗ mep53@cam.ac.uk; ∗∗∗∗ aknp2@cam.ac.uk; ab702@cam.ac.uk
Abstract: Smart manufacturing uses advanced data-driven solutions to improve performance and operations resilience, requiring large amounts of data delivered quickly, enabled by telecom networks and network elements such as routers or switches. Disruptions can render a network inoperable; avoiding them requires advanced responsiveness to network usage, achievable by embedding autonomy into the network, providing fast and scalable algorithms that use key metrics, such as the impact of failure of a network element on system functions, to manage disruptions. Centralised approaches are insufficient for this as they need time to transmit data to the controller, by which time it may have become irrelevant. Decentralised and information bounded measures solve this by placing computational agents near the data source. We propose an agent-based model to assess the value of the information for calculating decentralised criticality metrics, assigning a data collection agent to each network element and computing relevant indicators of the impact of failure in a decentralised way. This is evaluated by simulating discrete information exchange with concurrent data analysis, comparing measure accuracy to a benchmark, and using measure computation time as a proxy for computational complexity. Results show that losses in accuracy are offset by faster computations with fewer network dependencies.

Keywords: Computational Science; Discrete-event Simulation; Dynamic Systems; Intelligent Diagnostic Methodologies; Large Scale Multi-agent Systems; Multi-agent Simulation; Visibility; Criticality

⋆ This research was supported by the EPSRC and BT Prosperity Partnership project: Next Generation Converged Digital Infrastructure, grant number EP/R004935/1, and the UK Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Partnership Award for the University of Cambridge, grant number EP/R513180/1.
1. INTRODUCTION

Manufacturing processes have become more data-driven and dependent on the interconnection of multiple facilities for efficient decision-making; telecom infrastructure is therefore as pervasive a component of manufacturing industries as the power grid and other critical infrastructures. Telecom infrastructures are physical networks that support internet, telephony, and other digital services by facilitating data transfers between users. These infrastructures are represented by graphs, with network elements, such as routers or switches, as nodes and connections as edges. In such flow networks, data packet congestion at a node may cause node failure. Monitoring the impact of disruptions such as congestion on the network, known as criticality, is important for network behaviour control, which must be accurate and fast in networks functioning near capacity, as expected for future backbone networks (Moura and Hutchison (2019)).
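To make the graph representation concrete, the sketch below (illustrative only, not taken from the paper) builds a toy topology with networkx and ranks nodes by a simple criticality proxy that combines betweenness with a load-to-capacity ratio; the node names, loads, and capacities are invented for the example.

```python
# Illustrative sketch: a telecom infrastructure as a graph, with routers/
# switches as nodes and links as edges, ranked by a naive criticality proxy.
import networkx as nx

# Hypothetical topology.
G = nx.Graph()
G.add_edges_from([("r1", "r2"), ("r2", "r3"), ("r2", "r4"), ("r3", "r4")])

# Hypothetical node attributes: current traffic load and capacity.
load = {"r1": 10, "r2": 85, "r3": 40, "r4": 55}
capacity = {"r1": 100, "r2": 100, "r3": 100, "r4": 100}

# Naive proxy: topological importance (betweenness) weighted by utilisation,
# so congested, well-connected nodes rank highest.
betweenness = nx.betweenness_centrality(G)
criticality = {n: betweenness[n] * (load[n] / capacity[n]) for n in G.nodes}

for node, score in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```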
In centralised network monitoring and criticality computation, the central computational resources need information on the whole network to create a criticality measure (CM) (Salazar et al. (2016); Fang and Zio (2013)), requiring live and dynamic node topology and attribute data to respond to behavioural shifts. The increased data volume causes longer computation times, so conclusions reached at a given point in time lose relevance, and transferring too much data can itself promote critical events. The amount of data used for a CM should therefore be minimised while preserving meaning, which can be achieved by imposing a limited bound around a given node and computing criticality in a decentralised manner. This shortens data paths and reduces complexity by taking information only from a small region around a given node. We call these information bounded CMs (IB-CMs). An IB-CM can be approximated with a centrality measure used as a criticality estimate (CE), as both define importance within a multicomponent system (Birnbaum (1968)). The more accurate and the more efficiently computed an IB-CM is, the more valuable the information used in its calculation, with the value of information being a multivariate measure composed of the accuracy and efficiency of a given IB-CM under some information provision.
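As an illustration of what an information bound might look like, the following sketch approximates a node's criticality using only its radius-k ego network, with full-graph betweenness as the centralised benchmark it stands in for. The radius-based bound and the random placeholder topology are assumptions made for the example; they are not the paper's specific IB-CMs.

```python
# Sketch of an information-bounded criticality estimate (IB-CM), assuming
# the "limited bound" is a fixed hop radius around the focal node.
import networkx as nx

def ib_centrality(G: nx.Graph, node, radius: int = 2) -> float:
    """Criticality estimate (CE) for `node` using only its radius-k neighbourhood."""
    ego = nx.ego_graph(G, node, radius=radius)   # bounded information region
    local = nx.betweenness_centrality(ego)       # computed on the subgraph only
    return local[node]

G = nx.erdos_renyi_graph(200, 0.03, seed=1)      # placeholder topology (assumption)
approx = ib_centrality(G, 0, radius=2)
exact = nx.betweenness_centrality(G)[0]          # centralised benchmark
print(f"bounded estimate: {approx:.4f}  centralised benchmark: {exact:.4f}")
```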
This paper builds on Proselkov et al. (2020) to outline a method to assess the accuracy and computational efficiency of IB-CMs that change with time, with respect to a novel benchmark estimate of dynamic criticality under different communication paradigms (CPs).
different communication paradigms (CPs). These IB-CMs are designed for homogeneous flow networks. A prototype is presented that uses classic centrality measures as stand-ins for CMs and IB-CMs, applied to a real network topology in a telecom simulation use case.
2. LITERATURE REVIEW
Network topology affects routing and resilience to disrup-
tion since shorter distances give quicker transfers. Criti-
cality, defined as the impact of a node's inactivity on the operation of a network and evaluated via network connectivity in telecoms, is a key factor in understanding network resilience (Lü et al. (2016); Herrera et al. (2020)), and can be estimated using the current network state (Proselkov et al. (2020)). Criticality can inform prioritisation in network
prognostics for proactive maintenance. Many criticality
measures are based on centrality measures, including be-
tweenness (Freeman (1977)); eigencentrality; and degree
centrality. The first two are centralised, needing each node
to take information from all nodes. Degree centrality needs
each node to know the number of its neighbours.
Efficient decentralised computation approaches for under-
standing network criticality are important for networks
operating under stress (Cetinkaya and Sterbenz (2013)).
Cascade failures may also occur within normally functioning systems due to random errors, as in January 1990, when 114 switching nodes of the AT&T network successively went down due to an erroneous reset signal (Neumann (1995)).
Nodes within telecoms networks provide information about their state either by transmitting it to a supervisory node, which facilitates centralised centrality calculation, or by exchanging it with each other, which facilitates distributed centrality calculation. They can achieve decentralised communication by broadcasting to all neighbours their node ID, their value, and topological information including the travel history of the data packet and previously broadcast packets that are known to remain in motion (Lehmann and Kaufmann (2003)). In practice, completing this takes at least as many timesteps as the minimum distance between the two nodes.
Experimental evidence suggests increased computational efficiency and satisfactory performance of information bounded network measures, as in Ercsey-Ravasz and Toroczkai (2010), which details the relationship between the depth of the information bound and the size of the value distribution of the associated bounded betweenness measures. The value distribution increases exponentially with the depth of the bound up to the mean geodesic length before decreasing, suggesting meaningful sensitivity at the mean geodesic length. Tests were conducted on scale-free and random graphs. These only have one cluster, so it is expected that the ideal depth may be the mean cluster geodesic length.
Other papers give examples of limited range criticality
and centrality for static measures (Wehmuth and Ziviani
(2011); Chen et al. (2012); Nanda and Kotz (2008); Ker-
marrec et al. (2011); Dinh et al. (2010); Proselkov et al.
(2020)) and for dynamic distributed criticality measures.
All show accuracy despite limited boundaries. However, no large-scale analysis of relative efficiency (via computation time) and accuracy has yet been conducted for dynamic criticality measures.
3. METHODS
3.1 Telecom simulation model
The network topology is generated, creating the graph $G=(V,E)$, where $V$ is the set of nodes and $E$ the set of edges. The discrete information packet exchange simulator is then run¹ with short-range dependence, meaning random nodes generate data packets independently according to a Poisson distribution with random destinations (Veres and Boda (2005)).
As this is an information flow network, a timestep is how
long it takes for information to traverse one edge. Packets traverse the network, being stored and routed at nodes on the way to their destination, where they are removed from
the system. Nodes and edges each have a fixed capacity
which gets filled up over time since it takes time to process
packets at nodes and transmit them between nodes, with
processing and transmission time as fixed model input
parameters. This process is terminated after a fixed number of timesteps or once the network is too congested to function. Each node has a backlog capacity of $\phi$. The simulation produces a time series over $T$ of each node's queued-up data packet backlog, where the size of the queue held by node $u \in V$ at time $t \in T$ is $\phi^t_u$.
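A minimal sketch of this simulation loop is given below, assuming shortest-path routing, one forwarded packet per node per timestep, and hypothetical parameter names (`gen_rate`, `capacity`); it is an illustrative stand-in for the Anx-based simulator, not the paper's implementation.

```python
import numpy as np
import networkx as nx

def simulate_packet_exchange(G, t_max=1000, gen_rate=3, capacity=128, seed=0):
    """Sketch of the discrete packet-exchange simulation: random nodes emit
    packets with random destinations (Poisson arrivals), every node forwards
    its head-of-queue packet one hop per timestep along a shortest path, and
    packets are absorbed at their destination. Returns phi, where phi[t][u]
    is the backlog of node u at timestep t."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes)
    paths = dict(nx.all_pairs_shortest_path(G))      # static shortest-path routing table
    queues = {u: [] for u in nodes}                  # each queued entry is a packet's destination
    phi = []
    for t in range(t_max):
        for _ in range(rng.poisson(gen_rate)):       # new packets this timestep
            i, j = rng.choice(len(nodes), size=2, replace=False)
            src, dst = nodes[i], nodes[j]
            if len(queues[src]) < capacity:
                queues[src].append(dst)
        arrivals = {u: [] for u in nodes}            # packets received this timestep
        for u in nodes:
            if queues[u]:
                dst = queues[u].pop(0)
                nxt = paths[u][dst][1]               # next hop toward the destination
                if nxt == dst:
                    continue                         # delivered: packet leaves the system
                if len(queues[nxt]) + len(arrivals[nxt]) < capacity:
                    arrivals[nxt].append(dst)        # forward one hop; dropped if next hop is full
        for u in nodes:
            queues[u].extend(arrivals[u])
        phi.append({u: len(queues[u]) for u in nodes})
    return phi
```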
We then examine how nodes would behave if they were receiving and processing network data in real time in an agent-based simulation², with an independent agent situated at each node. Depending on our monitoring data CP, which determines how up-to-date information is (currentness), different nodes get different information regarding others depending on their position in the network. We investigate three CPs, named instant, constant, and periodic.
For a pair of nodes $u, v \in V$, there is a path $p_{uv} \subseteq G$ from $u$ to $v$ if, for some $n \in \mathbb{Z}^+$, there exists an ordered sequence of nodes $(u, (u_i)_{i=0}^{n}, v) \subseteq V$ with edges $\{uu_0, (u_iu_{i+1})_{i=0}^{n-1}, u_nv\} \subseteq E$, or $uv \in E$. If there is a path from $u$ to $v$, $u$ gets information about $\phi^t_v$. The queue at time $t$ that the agent at $u$ believes $v$ has is the perceived queue, $q^t_{uv} \in Q$. If $u = v$ this is $q^t_u$. Centrality calculation takes time, so the time from transmission to output is always greater than transmission to receipt between nodes; thus nodes must compute centralities at a lower frequency than the CP dictates to avoid losing currentness. This period between calculations is $\mu$, the monitor interval.
Instant communication is a simplification, assuming monitoring data is transferred instantly, where for all $u, v \in V$, $q^t_{uv} = q^t_v$. This is a base case, only achievable if monitoring data transfer became so fast as to be insignificant.
Constant communication has nodes declare their queues every timestep. This data traverses the network normally, because this declaration is a low bandwidth operation. For a shortest path (geodesic) of length $n$ between $u$ and $v$, information from $v$ takes $n$ timesteps to reach $u$, so $u$ perceives $v$ $n$ timesteps late, giving $q^t_{uv} = q^{t-n}_v$; distant nodes therefore give less accurate, valuable, or relevant information.
Periodic communication has nodes declare their queues
and perceived queues at the same frequency as they calcu-
late their centrality. This corresponds to some aggregation
of the functions within higher order control functions that use centralities as inputs. For monitor interval $\mu$ and nodes $u$ and $v$ with a length-$n$ geodesic, $q^t_{uv} = q^{t-\mu n}_v$, as each queue pass takes $\mu$ timesteps.
¹ Using a Python package called “Anx” (Likic and Shafi (2018)).
² Using a Python package called “Mesa” (Kazil et al. (2020)).
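As a sketch of how the three CPs delay the perceived queues, the function below maps a CP name to the lag applied to the true backlog series; the names (`phi`, `dist`, `perceived_queue`) and the clamping at the start of the run are assumptions of this sketch, not the paper's implementation.

```python
def perceived_queue(phi, dist, u, v, t, paradigm, mu=250):
    """Sketch of the perceived queue q^t_{uv} under each communication
    paradigm. phi[t][v] is the true backlog of v at timestep t, dist[u][v]
    the geodesic length n between u and v, and mu the monitor interval."""
    n = dist[u][v]
    if paradigm == "instant":        # q^t_{uv} = q^t_v
        lag = 0
    elif paradigm == "constant":     # q^t_{uv} = q^{t-n}_v
        lag = n
    elif paradigm == "periodic":     # q^t_{uv} = q^{t - mu*n}_v
        lag = mu * n
    else:
        raise ValueError(f"unknown paradigm: {paradigm}")
    return phi[max(t - lag, 0)][v]   # clamp lookups before the start of the run
```

For example, with `dist = dict(nx.all_pairs_shortest_path_length(G))` and the `phi` series from the simulator sketch above, `perceived_queue(phi, dist, 0, 7, 900, "periodic")` returns node 0's view of node 7's queue at timestep 900.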
With each CP, nodes receive perceived queues of others in the network. These values are used to inform dynamic, queue-dependent CEs, which are calculated with the adjacency matrix, and so with respect to edge weight rather than node weight, which we assign according to the following steps. First the graph is redefined as directed, such that for $(uv), (vu) \in E$, $(uv) \neq (vu)$. For a node $u$, for all $v \in \Gamma_1(u)$, the weight of edge $(uv)$ is
$$(uv)^t_q = q^t_u / |\Gamma_1(u)|, \qquad (1)$$
because larger neighbourhoods give nodes more chances to emit data packets and distribute load among them, and it accounts for the respective queues of node pairs since for all $(uv)$ there exists a $(vu)$ produced under the same rules.
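The edge weighting of Eqn. (1) can be sketched as below, building the directed, queue-weighted adjacency matrix used by the weighted CEs; `q_t` (a map from node to its perceived queue at the current timestep) and the function name are illustrative assumptions.

```python
import numpy as np

def weighted_adjacency(G, q_t):
    """Sketch of Eqn. (1): entry (u, v) of the directed adjacency matrix
    is q^t_u / |Gamma_1(u)| for every edge (u, v), and zero otherwise."""
    nodes = list(G.nodes)
    idx = {u: k for k, u in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for u in nodes:
        deg = max(G.degree(u), 1)                # |Gamma_1(u)|, guarding isolated nodes
        for v in G.neighbors(u):
            A[idx[u], idx[v]] = q_t[u] / deg     # weight of directed edge (u -> v)
    return A, nodes
```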
The following subsection describes the data analysis car-
ried out with the data provided according to each CP.
3.2 Centrality Measure CEs
According to each of the above CPs, the data delivered to
each agent situated at a node is used to compute centrality
measures over time as proxies and estimates of criticality.
In this initial study standard centrality measures are used,
with weighted and bounded extensions. The unweighted
measures only take topological data, whereas weighted
measures adjust their outputs according to the perceived
queues for each node. Unbounded, or sociocentric, mea-
sures take information from the whole network and stand
in for CMs, while bounded, or egocentric, measures take
information from a limited region around a given node
and stand in for IB-CMs.
We define the information boundary around a node by the geodesic distance. For a node $u \in V$, the set of nodes $i$ edges away is $\Gamma_i(u) \subseteq V$, where $\Gamma_1(u)$ is the neighbourhood of $u$. The set of nodes at most $i$ edges away from $u$ is $H_i(u) = \bigcup_{j=1}^{i} \Gamma_j(u)$. If $u$ has an information boundary at distance $i$, it takes information from $H_i(u)$.
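A short sketch of the information region $H_i(u)$, assuming networkx and a breadth-first search with cutoff $i$; the function name is illustrative.

```python
import networkx as nx

def information_region(G, u, i=2):
    """Sketch of H_i(u): all nodes at most i hops from u (the union of
    Gamma_1(u), ..., Gamma_i(u)), found by breadth-first search."""
    return set(nx.single_source_shortest_path_length(G, u, cutoff=i)) - {u}
```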
Degree Centrality: Unweighted Degree Centrality for a node $u$ counts the number of neighbours. It is defined as $C^u_d(u)^t = C^u_d(u) = |\Gamma_1(u)|$.
Weighted Degree Centrality counts each neighbour as many times as its perceived queue length. It is dynamic and defined as $C^w_d(u)^t = \sum_{v \in \Gamma_1(u)} q^t_{uv}$.
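For example, the weighted degree CE reduces to a sum over the neighbourhood's perceived queues; `q_t_u`, which maps each neighbour $v$ to $q^t_{uv}$, is a hypothetical name used only in this sketch.

```python
def weighted_degree(G, q_t_u, u):
    """Sketch of weighted degree centrality: C^w_d(u)^t is the sum of the
    perceived queues q^t_{uv} over u's neighbourhood."""
    return sum(q_t_u[v] for v in G.neighbors(u))
```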
Betweenness Centrality: All distinct paths with the same length and the minimum number of elements are geodesics. The number of geodesics from $v$ to $w$ is $\rho_{v,w} : V \to \mathbb{Z}^+$, and the number of geodesics from $v$ to $w$ passing through $u$ is $\rho_{v,w|u} : V \to \mathbb{Z}^+$.
Unweighted Sociocentric Betweenness Centrality (Freeman (1977)) tracks pathway disruption potential. It is static, calculating the fraction of shortest paths between all node pairs passing through the subject node.
Unweighted Egocentric Betweenness Centrality measures the betweenness of a bounded region surrounding a node. It correlates strongly with sociocentric betweenness (Marsden (2002)), and is computable in a decentralised manner. For node $u \in V$ it measures the betweenness of the induced subgraph of $H_i(u)$, such that
$$C^{ue}_b(u)^t = C^{ue}_b(u) = \sum_{v,w \in H_i(u)} \rho_{v,w|u} / \rho_{v,w}.$$
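A sketch of this egocentric variant, assuming networkx: betweenness is evaluated on the ego subgraph within the information boundary rather than on the whole graph.

```python
import networkx as nx

def egocentric_betweenness(G, u, radius=2):
    """Sketch of unweighted egocentric betweenness: the betweenness of u
    on the induced subgraph of nodes within `radius` hops of u."""
    ego = nx.ego_graph(G, u, radius=radius)             # induced subgraph around u
    return nx.betweenness_centrality(ego, normalized=False)[u]
```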
Weighted Sociocentric Betweenness Centrality uses a weighted shortest path parameter. $P_{vw}$ is the set of shortest paths between nodes $v$ and $w$, and using Eqn. (1) the CE is defined as
$$\omega^t_{v,w} = \sum_{p_{vw} \in P_{vw}} \; \sum_{(st) \in p_{vw},\, (st) \in E} (st)^t_q; \qquad \omega^t_{v,w|u} = \sum_{u \in p_{vw} \in P_{v,w}} \; \sum_{(st) \in p_{vw},\, (st) \in E} (st)^t_q, \qquad (2)$$
giving weighted sociocentric betweenness centrality as
$$C^{ws}_b(u)^t = \sum_{v,w \in V,\, u \neq v \neq w} \omega^t_{v,w|u} / \omega^t_{v,w}. \qquad (3)$$
Weighted Egocentric Betweenness Centrality takes Eqn. (3) but over $H_i(u)$.
Eigencentrality: Unweighted Sociocentric Eigencentrality captures the connectivity of the network, valuing nodes with more connections to well connected nodes. The number of edges between nodes $u_i$ and $u_j$ is $a_{i,j}$, displayable in a matrix, $A_G \in M_n(\{0,1\})$, the adjacency matrix, where
$$A_G = (a_{i,j}) = \begin{cases} 1, & (i,j) \in E \\ 0, & \text{otherwise}, \end{cases}$$
for the graph $G$. The eigencentralities of the nodes in the network are found for the largest eigenvalue, $\lambda_G$, with
$$A_G x = \lambda_G x, \qquad (4)$$
and the CE is the solution to Eqn. (4), numerically solved via power iteration, or Von Mises iteration (von Mises and Pollaczek-Geiringer (1929)).
Unweighted Egocentric Eigencentrality is the solution to Eqn. (4) over $H_i(u)$ rather than over $G$.
Weighted Sociocentric Eigencentrality uses the directed network with edge weights as defined by Eqn. (1). The adjacency matrix becomes dynamic and temporally dependent, such that for $A^t_G \in M_n(\mathbb{Z}^+)$,
$$A^t_G = (a_{i,j}) = \begin{cases} (u_iu_j)^t_q, & (i,j) \in E \\ 0, & \text{otherwise}. \end{cases} \qquad (5)$$
$A_G$ in Eqn. (4) is then replaced by $A^t_G$ from Eqn. (5).
Weighted Egocentric Eigencentrality uses $A^t_G$ from Eqn. (5). For node $u_j$ it is over $H_i(u_j)$, not $G$, creating
$$C^{we}_e(u_j)^t = (A^t_{H_i(u)} x)_j = (\lambda^t_{H_i(u)} x)_j.$$
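A sketch of the weighted egocentric variant via power (Von Mises) iteration on the ego region, reusing the Eqn. (1) weighting; `q_t`, the radius, and the iteration count are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import networkx as nx

def weighted_egocentric_eigencentrality(G, q_t, u, radius=2, iters=100):
    """Sketch: power iteration on the Eqn. (5) adjacency matrix restricted
    to H_i(u); returns the centrality entry corresponding to u."""
    ego = nx.ego_graph(G, u, radius=radius)           # ego region around u
    nodes = list(ego.nodes)
    idx = {v: k for k, v in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for v in nodes:                                   # Eqn. (1) edge weights
        deg = max(G.degree(v), 1)
        for w in ego.neighbors(v):
            A[idx[v], idx[w]] = q_t[v] / deg
    x = np.ones(len(nodes))
    for _ in range(iters):                            # power / Von Mises iteration
        y = A @ x
        norm = np.linalg.norm(y)
        if norm == 0:
            break                                     # all perceived queues are zero
        x = y / norm
    return x[idx[u]]
```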
These CEs will be used as proxies for criticality measures.
To compute the value of information as processed through
each measure, we now outline a validation method.
3.3 Validation Method
The measures above must be validated as correctly ap-
proximating dynamic criticality within the network. A
validation function must determine at any timestep the
similarity of our CE to a benchmark and its period of
relevance. Criticality measures the impact of failure, so we must define that impact and decide for how long the effects of some action can be said to have been caused by a previous one. Analysis
is conducted post hoc, using data that is neither limited by
the imperceptibility of the future nor communication constraints. We take linear functions of the total queue sizes of the whole network, using $\Phi^t = \sum_{u \in V} \phi^t_u$. We also find a time range for which we have sufficient confidence that all network states are sufficiently dependent on each other.
Ideal Time Horizon This is a moving time window, bisected by the present timestep, where the window's start sufficiently influenced all timesteps up to the present, and the present will sufficiently influence all timesteps up to the window's end. With it, we can find how far a CE must look into the future to sufficiently capture both the current network state and its influence. We iterate over a fixed number, $h_{test}$, of time horizon windows, $h_i$, less than half the simulation length, $t_{max}$, where $h_i = i\,t_{max}/(2h_{test})$, and take moving averages over $\Phi^t$ for each width $h_i$, so
$$MA^t_{\Phi;h_i} = \sum_{\tau=t-i}^{t} \Phi^{\tau}/i \quad \text{for } t \geq i \text{ (undefined for } t < i),$$
and $MA_{\Phi;h_i}$ is the time series made up by $MA^t_{\Phi;h_i}$. Then for all $t$ such that $MA^t_{\Phi;h_i}$ exists, we take the absolute difference between $MA^t_{\Phi;h_i}$ and $\Phi^t$, such that
$$MAD^t_i = |MA^t_{\Phi;h_i} - \Phi^t| \quad \text{for } t \geq i,$$
and get the sum of absolute differences, $SAD_i = \sum_t MAD^t_i$. Normalised, this is $NSAD_i = SAD_i/\max_{i=1}^{h_{test}} SAD_i$. Iterating through $NSAD_i$ in ascending $i$, we obtain $g_i = h_{test}(NSAD_{i+1} - NSAD_i)$. The ideal time horizon is where the relative gain in error from a wider window is large enough to suggest that all smaller window sizes cover regions with significant influence over each other. Beyond that, since error gain slows down, one cannot confidently claim events are the direct consequence of the current time. This confidence, the validation threshold, is an independent parameter, $c$, with which we define the ideal time horizon $h^*$ at the first $i$ where one of the following conditions is fulfilled, where if the last case is reached we must test more windows or increase the confidence threshold:
$$h^* = \begin{cases} h_i/2, & g_i \geq c; \\ h_{i-1}/2, & g_i < 0; \\ \text{undefined}, & i = h_{test}. \end{cases}$$
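A sketch of this search, assuming numpy, using the candidate widths $h_i$ as the moving-average spans and returning $h_i/2$ at the first threshold crossing; the exact window indexing is an assumption of this sketch rather than the paper's implementation.

```python
import numpy as np

def ideal_time_horizon(Phi, h_test=20, c=0.1):
    """Sketch of the ideal-time-horizon search: for each candidate window
    width h_i, compare the moving average of the total-queue series Phi
    with Phi itself, normalise the summed absolute differences, and return
    half the first width whose error gain g_i crosses the threshold c."""
    Phi = np.asarray(Phi, dtype=float)
    t_max = len(Phi)
    widths = [max(1, i * t_max // (2 * h_test)) for i in range(1, h_test + 1)]
    sad = []
    for w in widths:
        ma = np.convolve(Phi, np.ones(w) / w, mode="valid")   # moving average of width w
        sad.append(np.abs(ma - Phi[w - 1:]).sum())            # SAD_i
    nsad = np.array(sad) / max(sad)                           # NSAD_i
    g = h_test * np.diff(nsad)                                # g_i
    for i, gi in enumerate(g):
        if gi >= c:
            return widths[i] // 2
        if gi < 0:
            return widths[max(i - 1, 0)] // 2
    return None   # i = h_test reached: test more windows or raise the threshold
```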
Comparison Accuracy Function We compare CEs to a benchmark measure of criticality, defined as the change in network operation induced by any network state changes. Dependencies are sufficiently large for all timesteps at most $h^*$ timesteps away from each other, so impacts occur over a meaningful timescale of $h^*$. Impact at time $t$ is the change over $h^*$ timesteps across $t$, scaled by the built-up queues at time $t$, since a heavily used system has more to lose than an underused one. We obtain a moving average with window width $h^*$, $MA_{\Phi;h^*}$, and produce a time series of scaled differences across a time horizon, $THD^t = MA^t_{\Phi;h^*}\left(MA^{t+h^*}_{\Phi;h^*} - MA^{t-h^*+1}_{\Phi;h^*}\right)$. This is normalised to $[0,1]$, creating $NTHD^t = (THD^t - \min_t THD^t)/(\max_t THD^t - \min_t THD^t)$, the criticality benchmark. For a CE, $C(u)^t$, we calculate the network mean, $C^t = \frac{1}{|V|}\sum_{u\in V} C(u)^t$, and normalise to get $NC^t$. Let $\mathcal{T} = \{\tau \in T : \tau = k\mu,\ k \in \mathbb{Z}^+\}$. The error from the benchmark is $Err_t = NC^t - NTHD^t$, and the root mean squared error is $RMSE = \sqrt{\sum_{t\in\mathcal{T}} Err^2_t/|\mathcal{T}|}$. The lowest RMSE gives the most accurate measure since it most closely follows the benchmark criticality.
For any given CE, the value of information $V$ for a given dataset $D$ is the multivariate measure of the reciprocal of the RMSE and the reciprocal of the computation time, Comp, so that it grows with reduced error and increased time efficiency, such that $V(D;\mathrm{CE}) = \left((\mathrm{RMSE}; D, \mathrm{CE})^{-1}, (\mathrm{Comp}; D, \mathrm{CE})^{-1}\right)$.
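The benchmark comparison and the resulting value of information can be sketched as below; the index alignment of the moving averages and the names (`C_mean`, `comp_time`) are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def value_of_information(Phi, C_mean, h_star, mu, comp_time):
    """Sketch of the validation pipeline: build the criticality benchmark
    NTHD^t from the total-queue series Phi, compare the normalised
    network-mean CE against it at the monitored timesteps (multiples of
    mu), and return (1/RMSE, 1/Comp)."""
    Phi = np.asarray(Phi, dtype=float)
    C = np.asarray(C_mean, dtype=float)                # network-mean CE, same length as Phi
    w = h_star
    ma = np.convolve(Phi, np.ones(w) / w, mode="same")
    # THD^t ~ MA^t * (MA^{t+h*} - MA^{t-h*}), with approximate index alignment
    thd = ma[w:-w] * (ma[2 * w:] - ma[:-2 * w])
    nthd = (thd - thd.min()) / (thd.max() - thd.min())
    nc = (C - C.min()) / (C.max() - C.min())
    nc = nc[w:-w]                                      # align with the benchmark window
    t_idx = np.arange(0, len(nthd), mu)                # monitored timesteps
    err = nc[t_idx] - nthd[t_idx]
    rmse = np.sqrt(np.mean(err ** 2))
    return 1.0 / rmse, 1.0 / comp_time
```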
4. RESULTS AND DISCUSSION
We compared simulation results of instant, constant, and
periodic CPs for the accuracy of decentralised, dynamic,
and information bounded centrality measures for estimat-
ing criticality. Three simulations were run, one for each CP, using the real topology of the UK outer backbone infrastructure network for a UK telecoms service provider (Fig. 1).
Fig. 1. Outer backbone UK infrastructure network for a
large UK service provider
Simulation inputs are in Table 1. We skip 5000 timesteps to
avoid degeneracy since we initialise on an empty network.
Table 1. Simulation Inputs
Runtime t_max (hundredths of a second): 100000 timesteps
Monitor Interval µ: 250 timesteps
Packet generation rate: 19 packets/timestep
Processing delay: 13 timesteps
Queue check time: 1 timestep
Transmission time: 30 timesteps
Node capacity φ: 128 packets
Link capacity: 1024 packets
Information visibility boundary: 2 hops
Validation threshold c: 0.1 relative difference
Window widths tested h_test: 20 windows
Ideal time horizon h*: 10000 timesteps
Fig. 2 shows plots of all centralities and the criticality benchmark, $NTHD^t$, for each CP, filtered using a first-order Savitzky-Golay filter. This graph shows a substantial difference between outputs for weighted and unweighted measures across all CPs, but further analysis will show similar accuracy. The absolute error, $|Err_t|$, from $NTHD^t$ was calculated, and the results are plotted in Fig. 3. These plots are only for weighted measures since error from static values is a trivial transformation of the criticality benchmark. We can see the relative accuracy of each curve, showing similar accuracy between bounded and unbounded measures. Periodic CP readouts are more closely clustered in terms of accuracy. Error plots are filtered using a first-order Savitzky-Golay filter.
Fig. 2. CEs and $NTHD^t$ for simulations of each CP: (b) Instant, (c) Constant, (d) Periodic.
Fig. 3. Error for weighted CEs and $NTHD^t$ for simulations of each CP: (b) Instant, (c) Constant, (d) Periodic.
Table 2 shows RMSE from the criticality benchmark for
each CE and each CP. Weighted, dynamic CEs largely per-
formed much better than their static counterparts. Bound-
edness minimally impacted accuracy. Typically, CEs were
most accurate for constant CP, followed by periodic then
instant CPs. Of the weighted bounded CEs, betweenness was best in periodic CP, with large variation between CPs; eigenvector was best in constant CP; and degree was best in instant CP, where it was much better than in periodic and constant CPs, which differ widely. This suggests different CEs can be used for different CPs. No weighted bounded CE had an RMSE above 0.35. Betweenness was worst in constant CP with RMSE 0.339; degree was best in instant CP with RMSE 0.239.
Eigencentrality and betweenness had similar consistency
with ranges of 0.086 and 0.085 respectively. All values
are similar and low, suggesting combining bounding and
dynamicity creates accurate and scalable CEs.
Table 2. Root Mean Squared Error for each CP
and CE. Blue is the least error, red is the most.
CE                       Instant  Constant  Periodic
Betweenness              0.536    0.412     0.482
Bnd'd. Betweenness       0.491    0.368     0.437
Wtd. Betweenness         0.263    0.27      0.28
Wtd. Bnd'd. Betweenness  0.301    0.339     0.254
Eigenvector              0.44     0.321     0.388
Bnd'd. Eigenvector       0.381    0.268     0.332
Wtd. Eigenvector         0.228    0.212     0.244
Wtd. Bnd'd. Eigenvector  0.327    0.241     0.259
Degree                   0.487    0.365     0.433
Wtd. Degree              0.239    0.344     0.273
Boundedness and the time horizon are spatial and temporal efforts to increase the relevance of a given calculation.
A sufficiently small information boundary also reduces
computational complexity, allowing calculations to take
place within the relevant period. In application, the mon-
itor interval should be bounded above by the relevant
period, and is typically bounded below by the computation
time. This motivates analysing computation time of each
measure under each CP. All analyses were completed on
Google Colab Pro, a Jupyter notebook service that provides a Python 3 Google Compute Engine backend with adaptable memory of up to 32 GB RAM and 2 virtual CPUs (Intel(R) Xeon(R) @ 2.20GHz).
Fig. 4 shows computation time plots. Limiting information most affects weighted betweenness, which can take over 0.07 seconds unbounded but less than 0.01 seconds bounded, close to weighted bounded eigencentrality. Instant
and constant computation times are similar for all mea-
sures, though instant CP shows variability and intermit-
tent spikes during network congestion, where queues grow
due to build-up exceeding processing speed in certain
regions. Periodic CP was uniformly faster; since it places a lighter memory load through its lower frequency, this may be an artefact of computational stress on the computer during simulation. Dynamicity increases computation time
for complex measures, but minimally impacts degree cen-
trality, computed nearly instantly. Means for each measure
and CP are shown in Table 3.
Fig. 4. Computation time for all CEs for simulations of each CP, measured in seconds: (b) Instant, (c) Constant, (d) Periodic.
Table 3. Mean computation time for all CEs for
simulations of each CP, measured in seconds.
Blue is the fastest, red is the slowest.
CE                       Instant   Constant  Periodic
Betweenness              0.03112   0.03022   0.0245
Bnd'd. Betweenness       0.00676   0.00669   0.00554
Wtd. Betweenness         0.06512   0.06231   0.05248
Wtd. Bnd'd. Betweenness  0.01188   0.01154   0.00951
Eigenvector              0.00738   0.00726   0.00612
Bnd'd. Eigenvector       0.00445   0.0043    0.00357
Wtd. Eigenvector         0.01902   0.01874   0.01523
Wtd. Bnd'd. Eigenvector  0.01084   0.01068   0.00869
Degree                   1.30E-05  1.09E-05  5.19E-06
Wtd. Degree              1.83E-05  1.38E-05  6.88E-06
5. CONCLUSION
In this paper, we reviewed network criticality, communication paradigms, and existing IB-CMs. We introduced a model of simulated decentralised online network measurement under three CPs. We then defined the IB-CMs (in practice, CEs) used in our analysis. The validation method that determines the error and, from there, the value of the information was then outlined. Simulations for the various CEs
under different CPs and their results were then detailed,
showing the viability of information bounded and dynamic
criticality estimation. Together this research provides a
framework to develop more advanced CEs. Bounding infor-
mation visibility is a viable method for scalable measures
that preserves accuracy while speeding up calculation.
Each CE was shown to have output occupying the same approximate region across CPs, and so to behave similarly.
This holds true for error curves too, as in Figs. 2 and 3. In
fact, error is shown to be more constant for the periodic CP, suggesting an advantage in terms of control for that CP.
Computation time was found to have similar ordering among CEs between CPs, decreasing in variance and magnitude from instant to constant to periodic communication. This may be an artefact of simulating decentralised agents with a central computer. Alternatively, this may support the argument for decentralised computation, since a lower information frequency improves computation speed in singular agents. Adding interpolation or statistical data generation may then give finely detailed, accurate measures with low data packet load and high responsiveness.
This study simulated normally operating networks. Fur-
ther research will simulate critical node failure. We expect
this will introduce variability in computation times, which
may affect monitoring interval selection. Future work will also use the advanced CEs developed in Proselkov et al. (2020), as well as produce case-specific, data-derived CEs for maximum relevance. Using these for network con-
trol functions will then allow us to learn the value of infor-
mation function by removing measure dependency. This
will be achieved through the validation method defined
in this paper, measured against a performance metric. We
predict this will give useful findings in studies of network
homophily, and tools for policy makers when constructing
or designing networks with communication.
Beyond the telecom case, this analytic framework will
be applicable to other systems with dynamic flow and
independent cognitive agents, such as business networks,
mail networks, river networks, and more, each a critical
support network for any manufacturing system.
REFERENCES
Birnbaum, Z.W. (1968). On the Importance of Different
Components in a Multicomponent System. Technical
report, Washington University Seattle Lab of Statistical
Research, Seattle.
Cetinkaya, E.K. and Sterbenz, J.P.G. (2013). A Taxonomy
of Network Challenges. In Design of Reliable Commu-
nication Networks. IEEE.
Chen, D., Lü, L., Shang, M.S., Zhang, Y.C., and Zhou,
T. (2012). Identifying influential nodes in complex
networks. Physica A: Statistical Mechanics and its
Applications, 391(4), 1777–1787.
Dinh, T.N., Xuan, Y., Thai, M.T., Park, E.K., and Znati,
T. (2010). On Approximation of New Optimization
Methods for Assessing Network Vulnerability. In IEEE
INFOCOM 2010 - IEEE Conference on Computer Com-
munications, 1–9. IEEE.
Ercsey-Ravasz, M. and Toroczkai, Z. (2010). Centrality
scaling in large networks. Physical Review Letters,
105(3).
Fang, Y. and Zio, E. (2013). Hierarchical Modeling by
Recursive Unsupervised Spectral Clustering and Net-
work Extended Importance Measures to Analyze the Re-
liability Characteristics of Complex Network Systems.
American Journal of Operations Research, 03(01), 101–
112.
Freeman, L.C. (1977). A Set of Measures of Centrality
Based on Betweenness. Sociometry, 40(1), 35.
Herrera, M., Perez-Hernandez, M., Kumar Jain, A., and
Kumar Parlikad, A. (2020). Critical link analysis of a
national Internet backbone via dynamic perturbation.
In Advanced Maintenance Engineering, Services and
Technologies.
Kazil, J., Masad, D., and Crooks, A. (2020). Utilizing
Python for Agent-Based Modeling: The Mesa Frame-
work, volume 12268 LNCS. Springer International Pub-
lishing.
Kermarrec, A.M., Le Merrer, E., Sericola, B., and Trédan,
G. (2011). Second order centrality: Distributed assess-
ment of nodes criticity in complex networks. Computer
Communications, 34(5), 619–628.
Lehmann, K.A. and Kaufmann, M. (2003). Decentralized
algorithms for evaluating centrality in complex net-
works. Networks, (January 2003), 1–9.
Likic, V. and Shafi, K. (2018). Battlespace Mobile/Ad Hoc
Communication Networks: Performance, Vulnerability
and Resilience. 303–314.
Lü, L., Chen, D., Ren, X.L., Zhang, Q.M., Zhang, Y.C.,
and Zhou, T. (2016). Vital nodes identification in
complex networks. Physics Reports, 650, 1–63.
Marsden, P.V. (2002). Egocentric and sociocentric mea-
sures of network centrality. Social Networks, 24(4), 407–
422.
Moura, J. and Hutchison, D. (2019). Cyber-Physical
Systems Resilience: State of the Art, Research Issues
and Future Trends. 42.
Nanda, S. and Kotz, D. (2008). Localized Bridging Cen-
trality for Distributed Network Analysis. In 2008 Pro-
ceedings of 17th International Conference on Computer
Communications and Networks, 1–6. IEEE.
Neumann, P. (1995). Fatal Defect: Chasing Killer Com-
puter Bugs, volume 20. Times Books, first edition.
Proselkov, Y., Herrera, M., Parlikad, A.K., and Brintrup,
A. (2020). Distributed Dynamic Measures of Criticality
for Telecommunication Networks. In Service Oriented,
Holonic and Multi-agent Manufacturing Systems for
Industry of the Future, 1–12. Springer.
Salazar, J.C., Nejjari, F., Sarrate, R., Weber, P., and
Theilliol, D. (2016). Reliability Importance Measures for
Availability Enhancement in Drinking Water Networks.
Technical report.
Veres, A. and Boda, M. (2005). Complex Dynamics in
Communication Networks. Springer: Complexity.
von Mises, R. and Pollaczek-Geiringer, H. (1929). Praktische Verfahren der Gleichungsauflösung. ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 9, 152–164.
Wehmuth, K. and Ziviani, A. (2011). Distributed location
of the critical nodes to network robustness based on
spectral analysis. In LANOMS 2011, 1–8. IEEE.