Shall we collaborate? A model to analyse the benefits of
information sharing
Roberto Garrido-Pelaz
Computer Security Lab
Carlos III University of Madrid
Madrid, Spain
rgarrido@pa.uc3m.es
Lorena
González-Manzano
Computer Security Lab
Carlos III University of Madrid
Madrid, Spain
lgmanzan@inf.uc3m.es
Sergio Pastrana
Computer Security Lab
Carlos III University of Madrid
Madrid, Spain
spastran@inf.uc3m.es
ABSTRACT
Nowadays, both the amount of cyberattacks and their sophistication have considerably increased, and their prevention is a concern for most organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, looking for a win-win environment is not straightforward, and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that present a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attack propagation, and game theory for information sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments using simulated environments show how the proposed model provides insights on which conditions and scenarios are beneficial for information sharing.
CCS Concepts
•Networks → Network reliability; •Security and privacy → Trust frameworks; •Computer systems organization → Availability;
Keywords
Cybersecurity; Information sharing; Game theory
1. INTRODUCTION
In the last decade, cyber attacks have considerably increased, and nowadays cyber-crime is considered a stable and growing industry [18, 16, 29]. Cybersecurity prevention, detection and response is an ongoing challenge that requires constant, renewed efforts to protect critical infrastructures, organizations, enterprises and individual welfare. After an
intrusion or attack has succeeded, it is important to perform an incident investigation to determine its causes and consequences, and to update the security measures that failed (e.g. developing new IDS rules, updating blacklists, etc.). The information gathered in this process is a valuable asset, but unfortunately it is usually kept secret within the boundaries of companies, organizations or even national governments.
Cooperation between different parties has emerged as an essential strategy to improve cybersecurity prevention. National and international efforts encourage the application of cooperation-based solutions to address cybersecurity problems. For example, the US National Security Presidential Directive 54 / Homeland Security Presidential Directive 23 [15] was launched in 2008 to foster effective cybersecurity environments. The European Parliament has also agreed on a Network and Information Security (NIS) Directive [8] that will enter into force in August 2016. The benefits of cooperation are even higher in the case of critical infrastructures, which rely heavily on Information and Communication Technologies (ICT) and also share services among themselves. The development of reliable and resilient infrastructures should be viewed as an overall strategy rather than a single, independent task. Sharing cybersecurity-related information can be helpful in this regard.
However, there are many challenges and drawbacks that discourage organizations from sharing information [25]. First, it is critical to look for a win-win environment in which all entities benefit and where free-riders (i.e. those entities that benefit from others but do not cooperate) are avoided. Second, the reputation of targeted entities is an asset to protect, and one of the main drawbacks of sharing information is precisely the loss of privacy. Third, it is important to take into account some form of trust management, whereby companies can trust each other, to incentivize sharing [7].
Information sharing facilitates a common understanding of threats and thus benefits organizations in aspects like the quality of risk management, incident response or recovery management. However, despite the clear benefits, neither private nor public organizations are prone to collaborate unless there are tangible incentives that motivate them to do so. Recent works have identified information sharing as a cost-benefit Prisoner's Dilemma solved using game theory [30, 7]. However, these proposals focus on the mathematical analysis of games between two players, without taking into account the overall network of entities and their functional dependencies. Moreover, these works do not consider how
cyberattacks affect other entities in the network due to their
propagation.
In this work, we present a model for cybersecurity information sharing among dependent organizations impacted by different cyberattacks. The model allows simulating real networks and different adversarial capabilities by establishing different attack patterns, assets and dependencies between partners. It applies a propagation algorithm to infer how the entire network is affected by independent cyberattacks, and simulates different sharing strategies. Then, it outputs analytical results that may help security staff determine under which circumstances it is interesting to share or not. The proposed model does not aim at providing the ground truth about whether to share; rather, it helps organizations and governments take this decision through simulation.
In this work we present the following contributions:
1. We describe a model that considers the propagation of the impact of cyberattacks on a network and that applies different strategies for information sharing to mitigate such impact. The model applies Functional Dependency Network Analysis (FDNA) for attack propagation and game theory for information sharing management.

2. We have developed a publicly available simulation framework that implements the model. This framework allows simulating and studying test cases by analysing results both from the network point of view and for particular nodes.

3. Using the simulation framework, we have applied the model in different scenarios with different adversarial settings. Our experimental work shows how the model can provide knowledge about which sharing strategies are better under different network conditions and attack patterns.
The rest of the paper is organized as follows. Section 2 reviews the literature. Section 3 presents some background on functional dependency network analysis and game theory. Then the model overview and description are presented in Section 4, and Section 5 details the implemented framework and the experimental evaluation. Finally, Section 6 presents the conclusions and ongoing work.
2. RELATED WORK
Several works have proposed the use of game theory to analyse the trade-off in terms of incentives and costs of sharing information among entities. Naghizadeh and Liu [23] propose folk theorems and use an analytical method to study how private and public monitoring, through inter-temporal incentives, can support a degree of cooperation. Similarly to that work, we also propose a game-theory-based model where the utilities of sharing information are calculated from gains and costs. However, instead of using histories of publicly available actions, we propose the immunization factor and reputation as the main variables to incentivize sharing. Furthermore, the work in [23] identifies the cost of disclosure as one of the main drawbacks related to information sharing. While we also consider privacy and disclosure costs as two key drawbacks of sharing, we also introduce a third variable that affects costs: trust.
Tosh et al. [30] use game theory to help organizations decide whether to share information or not, using the CYBEX framework [26]. The authors use evolutionary game theory in order to attain an evolutionarily stable strategy (ESS) under various conditions. These conditions are extracted through simulation with synthetic data in a non-cooperative scenario with rational, profit-seeking firms. The main incentive for sharing is the information received, and thus the knowledge gained. In our work we also consider this knowledge as an incentive for sharing.
Khouzani et al. [19] present a two-stage Bayesian game between two firms to help decide how much to invest in searching for vulnerabilities and how much of this information to share. The authors determine the Perfect Bayesian Equilibrium to analytically extract strategy conditions encouraging information sharing. In [19] a firm benefits from losses in another, namely due to exploited bugs. Moreover, they distinguish between three kinds of costs: direct loss (of the compromised firm), common loss due to market shrinkage, and competitive loss.
Luiijf and Kernkamp [7] analyse the problem of free-riders, i.e. those entities that benefit from the shared information but do not cooperate. To minimize free-riding they propose two approaches: a) provide a quantitative analysis and show the benefits of reciprocity to incentivize sharing; b) enforce sharing environments by means of regulations, similarly to other works [20]. In our work, we consider that sharing information may not always be effective, and thus we adopt the first approach, i.e. to study cases in which information sharing benefits the overall network and cases where it only benefits some of the partners.
One of the main problems when analysing the costs and benefits of information sharing is experimentation with real data. Whereas most of the proposed works [23, 30, 20] perform their evaluation using analytical methods, Freudiger et al. [9] present a controlled data sharing approach and perform an empirical evaluation using a dataset of suspicious IP addresses. The authors of [9] use different similarity metrics to analyse the benefits of sharing and compare different sharing strategies: sharing everything, or only information about attack entities. The work in [9] relies on a static scenario and provides useful metrics to mathematically predict the benefits of information sharing. By contrast, using a simulated setting we empirically analyse how impacts are propagated through the network at runtime, to afterwards analyse how information sharing is able to mitigate such impacts in the future.
3. BACKGROUND
This section describes Functional Dependency Network
Analysis (FDNA) and game theory, as they are two core
disciplines applied in the proposed model.
3.1 Functional Dependency Network Analysis
Functional Dependency Network Analysis (FDNA) was first proposed by Garvey and Pinto [11]. They proposed a methodology to assess the impacts derived from a loss of supply in one provider on the operability of dependent services. The authors propose two metrics to quantify the dependencies between nodes: the Strength of Dependency and the Criticality of Dependency. Using these metrics, several works have analysed vulnerabilities, impacts and risks in system-of-systems scenarios [24, 5, 13].
Few works analyse the benefits of information sharing including dependencies among players, as well as the propagation of the impacts of cyberattacks. Laube et al. [20] refer to dependencies and the impact of cyberattacks through direct costs (security breaches that have occurred). By contrast, Hernandez-Ardieta et al. [14] use relationships among nodes inside a sharing community, focusing on knowledge flows and how they increase the information value of nodes. We consider that dependencies between partners or entities are a key concept in deciding whether to share information or not, and thus the proposed model uses functional dependencies. The main goal is to analyse the impact of cyberattacks through the services provided in a hypothetical network, and how information sharing can contribute to threat mitigation over time.
3.2 Game theory background
Game theory relies on four elements to define a game: players, rules or possible actions, information structure and game objective. In a sharing game, rules or actions are identified with the pure sharing strategies {share, not share}. Games are usually represented in normal form as a pay-off matrix. Pay-offs are numbers representing the outcome, through a measure of quantity or utility, that a player gets as a result of playing specific actions [27]. Pay-offs are the mechanism to reflect the motivations to select pure strategies. Values in a pay-off matrix can be given by constant values or by formulas, and they are closely related to the goal of the game, i.e. maximizing or minimizing the gained pay-offs.
There are several approaches to developing a game-theory-based solution. They range from classical game theory, focused on the analysis of equilibria among players' pay-offs, to evolutionary game theory, focused on the dynamics of strategy changes (in populations). Newer game-theory approaches are learning game theory [17], where players learn over time based on past decisions of other players, and behavioural game theory [4], based on psychological elements to describe human behaviour.
Regarding the classification of games, information sharing decisions fit well with the prisoner's dilemma [7], where cooperative behaviours are not clear-cut, as players have to find a trade-off between the benefits and costs of their actions. Moreover, games can be of perfect/imperfect and complete/incomplete information. Perfect information refers to the fact that each player, when making any decision, is informed of all the events that have previously occurred. Complete information refers to the fact that each player has knowledge of the pay-offs and strategies available to the remaining players.
4. THE MODEL
This section presents the proposed model. Section 4.1 presents an overview. Section 4.2 describes the model. Sections 4.3, 4.4 and 4.5 present, respectively, how cyberattack propagation is performed, how information is shared, and how decision variables are updated.
4.1 Model overview
Organizations and their assets are represented as networks composed of elements called nodes. The information/assets owned by nodes have a value. This value represents the potential loss (e.g. economic, reputational, resource-related) due to the impact of a cyberattack. Accordingly, we consider this value as a combination of the Confidentiality, Integrity and Availability (CIA) principles, since these properties are at the core of information security.
The model considers different periods of time (epochs) concerning the emergence of a cyberattack. Fig. 1 presents an overview of the model, in which two stages are distinguished: 1) propagation of the attack, and 2) information sharing.

[Figure 1: Model overview.]

Stage 1 shows that when a node is targeted by an attack, its CIA value decreases proportionally to the impact of the cyberattack. As a consequence of an attack, nodes that depend on the targeted node are also affected, according to the service levels agreed among nodes. This process is what we call propagation of cyberattacks.
Upon receiving an attack, the targeted node is able to develop countermeasures (e.g. as a result of an incident investigation), thus becoming immunized against future attacks. Then, attacked nodes must decide whether to share the information about the attacks (and the associated countermeasures) with other nodes.
The information sharing steps are represented in Stage 2 of Fig. 1. The sharing decision is based on several variables and environmental conditions. On the one hand, if a node decides to share information, it will incur a cost, which is related to the risk of unwanted disclosure of the shared information and the loss of privacy. Thus, this cost directly affects the CIA value of the node that shares the information. On the other hand, sharing may provide two main benefits: (1) raising the cybersecurity awareness level through the immunization factor, and (2) improving enterprise reputation. In general, sharing decisions are based on several criteria and have many associated variables: with whom to share, the cost of sharing, dependencies, trust and reputation among nodes, organization policy, information properties, level of resilience, and the knowledge acquired in sharing processes, among others. The problem is formulated as a trade-off between the costs and benefits of sharing information. As we show in Section 5, simulations with this model can be used to analyse the scenarios where different sharing strategies are better, regarding different conditions and variables like the presence of free-riders.
4.2 Model description
Let Ω = (A, V, C, Y, I, S, T, R, W) be the set of variables representing the state of the model at each epoch. Sharing decisions and the results obtained at each epoch are based on the values of the variables in Ω. In the following we describe these variables.
Services and dependencies of the network (A)
The model considers a network composed of nodes representing some ICT infrastructure. Each node encapsulates a relevant element or service having functional dependencies with other elements or services inside or outside the organization networks. A link between two nodes represents the dependency level between those nodes. The network can be viewed as a labelled graph, represented as an adjacency matrix A of size n × n, n being the number of nodes. An element a_{i,j} in matrix A represents the level of service that node i offers to node j, or the degree of dependency of node j on node i. Values of elements range from 0 to 1, where 0 means no service/dependency and 1 means total dependency of one node on the other.
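As a minimal illustration (with hypothetical values chosen for this example, not taken from our experiments), the following Octave snippet builds A for a three-node network:

    % Hypothetical 3-node network: A(i, j) = level of service node i offers node j.
    n = 3;
    A = zeros(n, n);
    A(1, 2) = 0.5;   % node 2 depends on node 1 with weight 0.5
    A(2, 3) = 1.0;   % node 3 is totally dependent on node 2
    assert(all(A(:) >= 0 & A(:) <= 1));   % dependency levels must lie in [0, 1]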
CIA value of the nodes (V)
Each node has a CIA value, which is based on the three traditional information security properties: confidentiality (C), integrity (I) and availability (A). V is a vector of size n where each element v_i is the value of node i inside the network. The values range from 0 to 1, where 0 means that the node has no value (i.e. a useless asset) and 1 means that the node is an important asset for an entity. While this is an interesting and complex area of risk analysis and asset management, establishing asset values is out of the scope of this work.
Costs of information sharing (C)
When an entity or organization shares information, it may incur a cost. Costs of sharing are described by a vector C of size n, where each element c_i is the cost of sharing for node i. Similarly to assigning values to assets, quantifying costs is not straightforward and depends on the application scenario. The cost of information sharing has a direct effect on the CIA value of nodes (e.g. due to loss of confidentiality or privacy). To simplify the model, in our current prototype these costs are calculated using a parameter k ∈ [0, 1] that represents the percentage of loss in the CIA value of the node due to sharing information. Thus, the cost of sharing is directly proportional to the CIA value of the nodes. Summarizing, the values in the vector C are calculated as c_i = k · v_i, where v_i is the CIA value of node i and c_i its cost of sharing.
Set of possible cyberattacks (Y)
A cyberattack is any attempt to compromise the confidentiality, integrity or availability of an asset (node) in an organization. The damage caused can be viewed as an economic impact, but in our case the impact decreases the CIA value of the targeted node. Each cyberattack has specific properties, and the organization should implement the correct countermeasures to solve problems as soon as possible. The set of cyberattacks is described by a vector Y of size n, where each element Y_i is the cyberattack received by node i in a given epoch. The model manages m possible cyberattacks¹, and it is assumed that each cyberattack has an associated default impact, which we denote D. The values of m and D can be modified during experimentation to simulate different attack scenarios.

¹We set a limited number of cyberattacks in our current prototype implementation, but a cyberattack may be viewed as a zero-day if none of the nodes are immunized against it.
Attack propagation causes two different kinds of damage. First, the targeted node suffers a direct impact, which can be mitigated if the node was prepared for it (immunized); thus it depends on the immunization factor of this node for this attack, as explained below. Second, a related impact is applied to the nodes that directly or indirectly depend on the targeted node. In this case, the CIA value is decreased according to the degree of dependency between the nodes and the direct impact on the targeted node.
Immunization factors (I)
An immunization factor indicates how well a node is prepared against a cyberattack and, thus, how it can mitigate its impact. It represents any mechanism or countermeasure such as knowledge about the attack (IDS rules, IoCs, blacklisted IPs, etc.), a piece of software (antivirus, SIEM), organization policies, etc. The immunization factor is specific to each attack and each node, and it is represented by a matrix I of size m × n, where m is the number of available cyberattacks and n is the number of nodes. An element I_{p,i} ∈ [0, 1] represents the degree (factor) by which node i is immunized against cyberattack p. Accordingly, a value I_{p,i} = 0 means that node i is not immunized against cyberattack p, while a value I_{p,i} = 1 indicates that node i is totally immunized against cyberattack p, which will then have no impact on the CIA value of i.
An important point is the difference between the direct and the related attack impact, as well as their relationship with the immunization factor. Once a node receives a cyberattack, the attack impacts its CIA value, mitigated only according to the immunization factor of that node. However, if this node is not immunized against the attack, the corresponding impact will be propagated across the network based on dependencies, regardless of the immunization of other nodes against this cyberattack. As a result, sharing information becomes essential if a node does not want to be affected by cyberattacks suffered by other nodes.
Sharing policy (S)
Sharing policies determine nodes' behaviour when deciding with whom to share or not to share. The decision is based on the properties of each node, network conditions and the variables in Ω. Each node keeps a fixed sharing policy along the whole game to select a pure strategy, namely share or not share. These policies are described by S, a vector where S_i is the sharing policy of node i. Examples of sharing policies are "Share only with those on which I directly depend" or "Share with those nodes that provide more services".
Trust among nodes (T)
In the cooperative cyber defence scenario, trust is a key aspect, since it is used to define collaborative security models [22]. Trust represents how nodes trust each other [10] and is described by a matrix T of size n × n. An element T_{i,j} ∈ [0, 1] is the trust value that node i has in node j. T_{i,j} = 0 means node i has no trust in node j, and T_{i,j} = 1 means node i has full trust in node j. Trust between nodes increments or decrements the cost of sharing, i.e. sharing with more trusted nodes is less costly.
Reputation among nodes (R)
Trust and reputation are related but different concepts [10]. On the one hand, trust is the subjective probability that an agent will carry out a specific task as expected. On the other hand, reputation represents the set of past opinions received from other users, that is, an expectancy of behaviour based on past interactions [1]. Our approach represents reputation through a matrix R of size n × n, where an element R_{i,j} ∈ [0, 1] is the reputation value that node i obtains from node j. R_{i,j} = 0 means node i receives no reputation from node j, and R_{i,j} = 1 means node i receives full reputation from node j.
Awareness (W)
Awareness represents the degree of useful information received by a particular node, i.e. information which was unknown to the receiver and thus increases the general awareness of the node at each epoch. We represent awareness through a matrix W of size n × n, where an element W_{i,j} ∈ {0, 1} is the awareness that node i obtains from node j. W_{i,j} = 0 means node i receives no valuable information from node j, and W_{i,j} = 1 means node i receives valuable information from node j. In this first approach we closely relate awareness to the immunization factors: node i gets W_{i,j} = 1 from node j only if i obtains a new immunization factor for some cyberattack p it was not yet immunized against, thus setting I_{p,i} = 1 for the cyberattack p received by node j.
Algorithm 1 shows the sequence of actions executed in the model. As input, it receives an initial state of the system represented by Ω, as explained above, and as output it provides a set of metrics and features regarding the final state and the sequence of actions of the simulation (e.g. which nodes have shared information, which nodes have been targeted by cyberattacks, etc.). These metrics are used to perform an analysis both of the network and of particular nodes. It is important to note that in line 3 of Algorithm 1 we generate the attack vector Y, establishing the nodes that are targeted at each epoch and which attacks from the attack set are used to impact the nodes, according to the default attack impacts in D.
Algorithm 1 Cyberattack Propagation and Information Sharing Simulation Model
1: procedure CyberModel(Ω = (A, V, C, Y, I, S, T, R, W))
2:   for t := 1 to MAX_EPOCH do
3:     Y ← set attack vector
4:     Calculate impact on nodes according to Y and D
5:     I ← set immunization factor of attacked nodes to 1
6:     Propagate impacts through the network
7:     Set sharing strategies
8:     Play sharing game
9:     Update CIA values according to sharing policies
10:    Update reputation regarding sharing decisions
11:  end for
12: end procedure
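The overall loop can be sketched in Octave as follows. This is a simplified skeleton of Algorithm 1, not our released prototype code; every helper function name is a hypothetical placeholder standing in for the corresponding step:

    % Skeleton of Algorithm 1 (all helper functions are hypothetical placeholders).
    function Omega = cyber_model(Omega, max_epoch)
      for t = 1:max_epoch
        Y = set_attack_vector(Omega);               % line 3: pick attacks/targets
        Omega = apply_direct_impacts(Omega, Y);     % line 4: Eq. 1 on targeted nodes
        Omega = immunize_attacked(Omega, Y);        % line 5: set I(Y(i), i) = 1
        Omega = propagate_impacts(Omega, Y);        % line 6: Eq. 2 on dependent nodes
        Omega = play_sharing_game(Omega);           % lines 7-8: pairwise games
        Omega = update_cia_values(Omega);           % line 9: Eq. 3 where it applies
        Omega = update_reputation(Omega);           % line 10: Algorithm 2
      end
    end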
4.3 Propagating cyberattack impacts
In the first step of the model, once a node is under attack, dependent nodes are somehow affected. The propagation of these impacts is the first process computed in each epoch (from line 4 in Algorithm 1).
On the one hand, the direct impact on the targeted node decreases the value of this node according to Eq. 1:

\forall i \in V,\; V_i^t = V_i^{t-1} - \| V_i^{t-1} \cdot D_{Y_i^t} \cdot (1 - I_{Y_i^t, i}) \|    (1)

Here, Y_i^t is the cyberattack received by node i in epoch t, and D_{Y_i^t} is the default impact of such an attack according to the initialized values described in the previous section. I_{Y_i^t, i} is the immunization factor of node i for the cyberattack Y_i^t. Note that if I_{Y_i^t, i} is 0, then node i suffers the full impact D_{Y_i^t}, and if I_{Y_i^t, i} is 1, then node i is not impacted at all.

Without loss of generality, in the current implementation of the model, nodes that are directly attacked get an immunization factor of 1 for that specific attack (thus I_{Y_i^t, i} = 1). This represents total immunization against future attacks of the same type, due to a perfect incident response procedure (we discuss this assumption in Section 4.6).
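A direct Octave transcription of Eq. 1 might look as follows (our own sketch; we assume Y(i) = 0 encodes "node i not attacked in this epoch"):

    % Direct impact of attacks (Eq. 1) on each targeted node.
    % V: CIA values (n x 1); Y: attack index per node (0 = not attacked);
    % D: default impact per attack (m x 1); I: immunization matrix (m x n).
    for i = 1:n
      if Y(i) > 0
        V(i) = V(i) - V(i) * D(Y(i)) * (1 - I(Y(i), i));
        I(Y(i), i) = 1;   % perfect incident response: immunized from now on
      end
    end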
On the other hand, the impact of the cyberattack is propagated across the network (line 6 in Algorithm 1). Propagation is carried out over both directly and indirectly dependent nodes. Direct dependencies are well represented in the dependency network matrix A, while indirect dependencies are more difficult to obtain. To address this issue, we previously calculate the indirect services matrix B, using a Depth-First Search algorithm on A. With this indirect service matrix we can calculate the impact of cyberattacks and the new CIA value for each node in the network, as shown in Eq. 2:

\forall j \in V,\; V_j^t = V_j^{t-1} - \| V_j^{t-1} \cdot D_{Y_i^t} \cdot (1 - I_{Y_i^t, i}) \cdot B_{i,j} \|    (2)

Here, i is the node directly attacked and j is the dependent node impacted. B_{i,j} is the service weight that node i offers directly or indirectly to node j. As described before, the immunization factor does not reduce the impact of the propagated effect of an attack; that is, if a node j receives an impact derived from an attack on another node i, the effect on node j only depends on the effect on node i and the service weight defined in B_{i,j}.
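The propagation of Eq. 2 can be sketched as below. We assume B has already been derived from A via depth-first search, and that I_before is a snapshot of the immunization matrix taken before the attacked nodes were immunized in the current epoch (both names are our own assumptions):

    % Propagated impact of an attack on node i (Eq. 2) to every dependent node j.
    direct_factor = D(Y(i)) * (1 - I_before(Y(i), i));   % mitigation at node i only
    for j = 1:n
      if j != i && B(i, j) > 0
        V(j) = V(j) - V(j) * direct_factor * B(i, j);    % I(., j) plays no role here
      end
    end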
4.4 Information sharing game
The second step of the model is to decide whether nodes under attack share information or not. In this step, identified by lines 7 and 8 of Algorithm 1, sharing strategies are established according to the sharing policies and applied to calculate the pay-offs (benefits and costs) at each epoch.
4.4.1 Game definition
Players are the nodes of the network and the set of pure strategies is {Share, Not Share}. Regarding the number of players, our approach proposes multiple two-player (pairwise) games. Thus, we define iterative two-node multiple games, with n · (n − 1) different games played with different results at each epoch. Moreover, the proposed game is dynamic, due to the fact that participants play stages of the same game over time. We therefore propose an instance of the Iterated Prisoner's Dilemma, as decisions can vary over time and players have to decide their strategies without communicating them to others, under free-riding conditions.

Non-cooperative and inefficient games can lead to efficient trade-offs through repetition [23], but it is paramount to know whether games are played under perfect or imperfect information, as well as under complete or incomplete information conditions. Our approach follows an imperfect- and complete-information game, since every node knows the pay-off configuration and the strategies available to other players, but nodes do not know every action performed by the others (imperfect information).

Additionally, as shown in Table 1, this is a non-zero-sum game, since the pay-off gained by one node is not exactly what the other loses. Furthermore, the game is asymmetric, since two players can get different pay-offs even when applying the same strategy. This is because pay-offs depend on the particular trust, information properties and degree of dependency among nodes. Even though the prisoner's dilemma usually takes a symmetric configuration, it can take an asymmetric form, as presented in [31], and it is important to note that some works [2] point out that asymmetry reduces cooperation rates in prisoner's dilemma games.
4.4.2 Sharing strategy
Selecting the sharing strategy for each node is one of the most important decisions to take in cooperative scenarios. We distinguish between the sharing strategy and sharing policy concepts. Sharing strategies refer to sharing or not sharing, that is, the pure strategies. Sharing policies, on the other hand, correspond to the "strategies" that players use to decide which pure strategy to play. Sharing policies can be supported by decision processes ranging from basic to very complex. The classical and most common sharing policies applied to the Iterated Prisoner's Dilemma for the behavioural analysis of players are [21]: AllC (always cooperate); AllD (defect on every move); RAND (random player); TFT (tit for tat), that is, cooperate on the first move, then copy the opponent's last move; and Grim (grim trigger), that is, cooperate until the opponent defects, thereafter always defect. Sharing policies are particular to each scenario, as we will see in Section 5.
4.4.3 Calculating pay-offs
Pay-off functions depend on reputation, awareness, trust and the cost of sharing sensitive information with others. The pay-off matrix is shown in Table 1, where U_A and U_B are respectively the pay-offs (utilities) of A and B obtained for each of the four possible combinations of players' actions. There are mainly two factors in each equation: first, the terms allocating benefits, and second, the terms allocating costs according to the chosen strategy. In general terms, it indicates what players win and what players lose. Regarding the notation, R_AB is the reputation that node A receives from B, T_AB represents the trust from node A to node B, and C_A is the cost of sharing for node A. These values are extracted from the variables defined in Section 4.2.

W_AB represents the awareness degree gained by node A from node B. It is calculated as W_AB = 1 ⟺ (B shares with A) ∧ (B is under attack) ∧ (A is not immunized against the attack received by B). That is, node A gets immunized against a given attack it was not prepared for, thanks to the information received from node B.

Also note how trust directly affects the cost of sharing. The term C_A / T_AB means that the higher the trust from A to B, the lower the effective cost for A of sharing with B, and vice versa.
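As an illustration, the pay-off of one player in a pairwise game can be computed directly from Table 1. The Octave sketch below is our own (the variable and function names are assumptions, not the prototype's API):

    % Pay-off of node A against node B for one epoch, following Table 1.
    % share_A, share_B: booleans; R_AB, C_A in [0,1]; T_AB in (0,1]; W_AB in {0,1}.
    function u = payoff_A(share_A, share_B, R_AB, W_AB, C_A, T_AB)
      if share_A && share_B
        u = (R_AB + W_AB) - (C_A / T_AB);
      elseif share_A && !share_B
        u = R_AB - (C_A / T_AB);
      elseif !share_A && share_B
        u = (C_A + W_AB) - R_AB;
      else
        u = C_A - (R_AB + W_AB);
      end
    end

The pay-off of B is obtained with the roles swapped, e.g. payoff_A(share_B, share_A, R_BA, W_BA, C_B, T_BA). Note that the division by trust requires T_AB to be strictly positive in this formulation.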
4.5 Updating decision variables
Once attacks have been propagated and the information has been shared (or not), the final step is to update the CIA value and reputation of all nodes.

In addition to the decrease of CIA value due to attack impacts, sharing information has an effect on the CIA value as well, namely the cost of information sharing, as shown in Eq. 3. This reduction due to information sharing is only applied when a node has been targeted by a cyberattack, was not previously immunized, has shared information with some other node, and the mean pay-off obtained by sharing information is less than 0.

\forall i \in V,\; V_i^t = V_i^{t-1} - C_i    (3)
Also, the reputation obtained by each node is updated at each epoch. As Algorithm 2 shows, the process for updating reputation depends on previous reputation scores, sharing actions, the awareness obtained, and whether nodes are under attack or not. In order to prevent free-riding behaviours, we identify three cases, formalized in lines 4 to 11 of Algorithm 2: (1) if node j increases its awareness thanks to information shared by node i, that is, j gets immunized by i, then j rewards the reputation score of i by a constant K_reward; (2) if node j does not increase its awareness thanks to i but i is willing to share, then it is difficult to know whether node i is a free-rider or the information provided is simply not useful; in this case, we propose not to modify the previous reputation value; (3) if node j does not increase its awareness thanks to i and i is not willing to share (free-riding behaviour), then the reputation score of i decreases, punished by K_punish.
Algorithm 2 Updating reputation scores procedure
1: for i := 1 to n do
2:   for j := 1 to n do
3:     if j ≠ i then
4:       if Awareness(j, i) > 0 then
5:         R_{ij}^t = R_{ij}^{t-1} + (K_reward · R_{ij}^{t-1})
6:       else
7:         if SharingStrategy(i, j) = Share then
8:           R_{ij}^t = R_{ij}^{t-1}
9:         else
10:          R_{ij}^t = R_{ij}^{t-1} − (K_punish · R_{ij}^{t-1})
11:        end if
12:      end if
13:    end if
14:  end for
15: end for
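For reference, a literal Octave transcription of Algorithm 2 could read as follows (our sketch; shares(i, j) is a hypothetical boolean matrix of sharing decisions, and W is the awareness matrix of Section 4.2):

    % Reputation update: reward informative sharers, punish free-riders (Alg. 2).
    K_reward = 0.3;  K_punish = 0.3;   % values used in Section 5.2
    for i = 1:n
      for j = 1:n
        if j != i
          if W(j, i) > 0                     % j increased awareness thanks to i
            R(i, j) = R(i, j) + K_reward * R(i, j);
          elseif shares(i, j)                % i shared, but info was not useful:
            R(i, j) = R(i, j);               % keep previous reputation unchanged
          else                               % free-riding behaviour: punish
            R(i, j) = R(i, j) - K_punish * R(i, j);
          end
        end
      end
    end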
4.6 Assumptions
Due to the limitations of simulation, during the definition of our model we have made some assumptions that may be subject to discussion in real settings. First, the value of nodes always decreases over time, due to the impact of cyberattacks and the costs of information sharing. We do not consider mechanisms that may increase this value, such as contingency plans or asset restoration. Secondly, when a node has been targeted by an attack, we assume that it performs a proper incident response investigation and thus implements countermeasures, gaining an immunization factor of 1. In this way, from that moment on the node will be immunized against such a cyberattack.
5. EXPERIMENTATION
This section presents a simulation framework based on the model described in Section 4, whose prototype has been implemented in Octave [6]. Using this prototype, a few case studies are analysed by means of the metrics provided by the simulation. First, the goals and general conditions of the simulation framework are introduced in Section 5.1. Second, Section 5.2 presents the analysed scenarios and specific settings. In Section 5.3 we describe the metrics used and present the analysis results from both the general-welfare and specific-node viewpoints.
Table 1: Pay-off matrix for the information sharing game

                     B: Share                                   B: Not Share
A: Share        U_A = (R_AB + W_AB) - (C_A / T_AB)         U_A = R_AB - (C_A / T_AB)
                U_B = (R_BA + W_BA) - (C_B / T_BA)         U_B = (C_B + W_BA) - R_BA
A: Not Share    U_A = (C_A + W_AB) - R_AB                  U_A = C_A - (R_AB + W_AB)
                U_B = R_BA - (C_B / T_BA)                  U_B = C_B - (R_BA + W_BA)
[Figure 2: Evolution of the mean CIA value of nodes for each attack and sharing scenario. Three panels ("Attack 5% Random", "Attack Probab. High Dep." and "Attack Probab. High Serv.") plot MeanCIA (y-axis, 0.0 to 0.8) against epochs (x-axis, 0 to 200) for the four sharing policies: not sharing, sharing with 10% of dependencies, sharing with all dependencies, and sharing with all nodes.]
5.1 Simulation framework
Empirical evaluation in real large-scale cybersecurity information sharing environments is hard to carry out, due to the highly diverse and dynamic conditions of the scenarios [28]. Thus, we have implemented a simulation framework using GNU Octave [6], based on the proposed model. This simulation framework allows simulating specific cyberattack and information sharing scenarios, as well as studying the behaviour and evolution of nodes (assets) over time. The main goal is to show how the model can help in decision-making problems by analysing test cases.

We aim to build a flexible framework with configurable settings, namely network size and topology, attack scenarios, sharing policies, and initial trust, reputation and CIA values.
To foster research on information sharing, we provide an open-source version of the prototype in our GitHub repository (https://github.com/rguseg/infosh-framework).

5.2 Experimental set-up
We carry out a series of experiments to analyse specific network topology behaviours over time, based on the Monte Carlo simulation method. We aim to compare network and node evolution according to different sharing policies in three different adversarial models. Hereafter we describe our experimental set-up. It is important to note that the main goal of our experimental work is to show the benefits of using the proposed model and how it can be used in different case studies. The established settings may not represent real scenarios, but they aim to simulate the general idiosyncrasies of current networks.

Network sampling. We set a random scale-free directed network composed of 50 nodes. We consider that 50 nodes represent a medium-sized sharing community. Also, we choose a scale-free network since it has a topology similar to that of the Internet [3]. Note that in scale-free networks some nodes are highly connected, while most of the nodes have low connectivity. In the proposed network one node has an input degree of 14, and most of them have 2 or 3; output degrees have similar properties. Regarding the weight of dependencies, we simulate a network where dependency values are fixed to 0.5 between every pair of connected nodes, so each node is equally dependent on its neighbours.
Attack scenario. It presents the degree of threat to which a sharing community is exposed, and it is determined by the different cyberattacks, their impact, their frequency and which nodes are targeted. We randomly choose attacks from a predefined catalogue with an associated impact.

In particular, we assume the existence of a catalogue composed of 10 attacks (m = 10) to give the simulation enough variability. The impact of each cyberattack follows a normal distribution with mean 0.4 and standard deviation 0.2. This means that most cyberattacks have an impact between 0.2 and 0.6 (one standard deviation around the mean) on the targeted node.

At each epoch, 30% of the attacks from the catalogue are randomly selected. Then, a subset (vector) Y of nodes becomes the targets. The selection of targeted nodes depends on many specific conditions, and it may be variable in real settings (e.g. the knowledge of the adversary about the network topology, or the available vulnerabilities to exploit). In the experimentation we use the following criteria:
- Random selection. We randomly select 5% of the nodes of the network to be attacked.

- Attack those with higher dependencies. We first calculate the input degree of each node and sort them in descending order. For each node, we estimate the probability of being attacked as p = DegreeInput / n, where n is the number of nodes in the network. Consequently, the higher the weight of inputs (dependencies), the higher the likelihood of being targeted.

- Attack those providing the highest number of services. This applies the same procedure as for dependencies, but taking the nodes' output degrees instead of their input degrees. The probability of being attacked is estimated as p = DegreeOutput / n.
When the degree of a node is 0, it means that it has no dependencies or services to offer. Even so, such nodes may still be targets of cyberattacks, so in these cases we set a default (minimum) probability of being targeted equal to 2% (p = 0.02).
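For concreteness, the degree-based target selection can be sketched in Octave as follows. This is our own illustrative sketch, not the released prototype code; it assumes the adjacency matrix A from Section 4.2 and applies the 2% floor described above:

    % Probabilistic target selection: higher in-degree => higher attack likelihood.
    deg_in = sum(A > 0, 1)';           % in-degree of each node (column sums of A)
    p = max(deg_in / n, 0.02);         % per-node attack probability with 2% floor
    targets = find(rand(n, 1) < p);    % nodes targeted in this epoch

The service-based criterion is identical, replacing deg_in with the out-degrees sum(A > 0, 2).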
Sharing policies. We set up four available sharing policies for each simulation, as presented in Section 4.4. Since our goal is to show how the model helps in deciding which sharing strategies are better (independently of local policies), in our experimental work sharing policies are static and global, i.e. they do not change over time and are the same for all the nodes. These policies are: a) no one shares; b) nodes share with the 10% of nodes on which they most depend (by weight of dependencies); c) nodes share with all the nodes they depend on (100%); d) nodes share with all nodes in the network (broadcast).
Initialization of values. The CIA value of nodes is set to 0.8, within the range [0, 1]. Setting CIA values to 1 would mean that a node is the most valuable asset of an organization, but we do not aim to simulate the most critical assets in our experiments. Given the difficulties of establishing trust values [12], initial trust values are set to 0.5, within the range [0, 1]. This decision represents a neutral position which remains static over time.
Other parameters. For each particular scenario, we run 30 simulations of 200 epochs. The percentage of loss k in the CIA value of each node due to information sharing (i.e. the cost) is set to 0.2, and the levels of punishment (K_punish) and reward (K_reward) are both set to 0.3.
5.3 Results
We focus the analysis on two areas. First, we focus on the evolution of the general welfare of the network over time, due to the application of the different sharing policies. Second, we aim at analysing and identifying particular nodes that obtain different benefits under similar conditions, so as to extract useful information from them. To this end, we define three metrics, named MeanCIA, Gain and Information Quality, and then use these metrics to carry out the analyses.
Definition 1. \bar{V}^t is the MeanCIA metric, which represents the general welfare of the network in epoch t. It is calculated as the average CIA value over all nodes and all simulations, as shown in Eq. 4:

\bar{V}^t = \frac{1}{sims} \sum_{s=1}^{sims} \left( \frac{1}{n} \sum_{i=1}^{n} V_i^{s,t} \right)    (4)

where t is the epoch to process, sims is the number of simulation runs, n is the number of network nodes, and V_i^{s,t} is the CIA value of node i in simulation s and epoch t.
Definition 2. G_i^t is the Gain metric, which indicates the degree of CIA value gained or lost by each node because of information sharing. It is calculated as the difference between the CIA values of each node after applying two policies (sharing with all the nodes, and not sharing) in two scenarios with otherwise identical conditions, as shown in Eq. 5:

G_i^t = V_i^{sh2,t} - V_i^{sh1,t}    (5)

where t is the epoch to process, i is the node, V_i^{sh1,t} is the CIA value of node i when applying sharing policy sh1 (not sharing), and V_i^{sh2,t} is the CIA value of node i when applying sharing policy sh2 (sharing with all the nodes).
Definition 3. Q is the Information Quality metric, which represents the amount of information that is useful to increase the awareness of the nodes. Concretely, Q_i^out is the amount of quality information sent and Q_i^in is the amount of quality information received by node i.
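Given per-run CIA trajectories, the first two metrics are straightforward to compute. The sketch below is our own and assumes hypothetical recorded arrays: V_hist of size n × epochs × sims for one policy, and V_hist_sh1, V_hist_sh2 of size n × epochs from two paired runs under policies a) and d):

    % MeanCIA (Eq. 4): average CIA over all nodes and simulation runs, per epoch.
    mean_cia = squeeze(mean(mean(V_hist, 1), 3));   % 1 x epochs vector

    % Gain (Eq. 5): per-node CIA difference between policies at epoch t = 200.
    t = 200;
    G = V_hist_sh2(:, t) - V_hist_sh1(:, t);        % sharing-all minus not-sharing
    beneficiaries = find(G > 0);                    % nodes that profit from sharing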
Based on these metrics, we next present the results obtained and the analysis performed on the three attack scenarios when applying the different sharing policies.
5.3.1 Analysis of the general welfare of the network
In this section, we provide an analysis of the welfare evolution over time for the four sharing policies applied in each of the three attack scenarios described in Section 5.2, i.e. a total of 12 different scenarios.
Fig. 2 shows the time evolution of MeanCIA during 200 epochs for the four sharing policies. It can be observed that at the beginning of the simulation all sharing policies yield rather similar MeanCIA values, but MeanCIA decreases faster when applying policies that do not share or that perform selective sharing. Besides, applying sharing policies based on dependencies does not produce much benefit. Concretely, sharing with 10% of dependent nodes only improves on not sharing by 5%, 6% and 2% in the three attack scenarios respectively, while sharing with all the dependent nodes only improves by 10%, 7% and 3%. Sharing with all nodes in the network gives improvement rates over not sharing of 64%, 33% and 32%. Thus, it can be concluded that only the policy of sharing with all nodes is substantially beneficial for the general welfare of the network, in the terms and conditions established during our experimentation.
Regarding the effects of the different attack scenarios on the MeanCIA, we can observe that, in general, "Attack 5% Random" has a higher impact than "Attack Probab. High Dep." and "Attack Probab. High Serv.", no matter which information sharing policy is applied. Thus, from the adversarial point of view, it is better to attack randomly if the adversary knows that no information sharing, or sharing only with dependent nodes, is being applied.
5.3.2 Analysis and identification of critical nodes
In this section, we provide an analysis to detect the nodes that are most important to the community in terms of the Information Quality metric presented above, that is, the amount of information shared with others that actually increases their awareness, i.e. information that was not previously known in the community. We analyse these relevant nodes according to the Gain metric defined above.
First, we calculate G_i^t for every node i in the network and every simulation, focusing only on the last epoch, t = 200. We only compare the sharing scenarios with the most relevant differences found in Fig. 2, that is, the scenarios applying sharing policies a) and d) as defined in Section 5.2. This means that sh1 and sh2 described in Eq. 5 are not sharing and sharing with all nodes, respectively. Nodes that benefit from scenario sh2 with respect to scenario sh1 are those with G_i^200 > 0. Nodes that do not benefit from scenario sh2 with respect to scenario sh1 are those with G_i^200 ≤ 0.
Second, we conduct the analysis in terms of how many pieces of quality information are sent (Q_i^out) and received (Q_i^in) by each node. Fig. 3 shows the distribution of nodes in terms of how many pieces of quality information are provided (y-axis: Q_i^out) and received (x-axis: Q_i^in). Red triangles represent those nodes that do not obtain benefits from information sharing (G_i^200 ≤ 0), and blue circles represent the opposite (G_i^200 > 0). In general, it can be observed that nodes that do not obtain any benefit usually provide more pieces of quality information than they receive. This may occur because they receive attacks that are new to the network (e.g. zero-day exploits) and afterwards notify (and immunize) the remaining nodes about them, but then do not receive information regarding other attacks. While this obviously depends on the specific settings and the attack scenario, the analysis provided by this simulation framework allows the identification of nodes whose information is relevant for the health of the entire network. Moreover, since these nodes may not benefit from the sharing community, in real settings they could be rewarded by other means (e.g. by providing extra economic benefits) to incentivize their cooperativeness.
[Figure 3: Distribution of nodes that gain (blue circles) and lose out (red triangles) because of information sharing, in terms of pieces of quality information provided (y-axis, Q_i^out, 0 to 600) and received (x-axis, Q_i^in, 0 to 600), for the three attack scenarios.]
6. CONCLUSIONS
Attack prevention and detection are essential tasks in cybersecurity management. Organizations suffer multiple cyberattacks, and information sharing can help to develop early prevention mechanisms. However, organizations are not willing to share information unless incentives are provided. In this regard, this paper presents a model for cybersecurity information sharing among dependent organizations impacted by different cyberattacks. Functional Dependency Network Analysis is used for attack propagation, and game theory for information sharing management. A framework has been developed, and the model has been tested in particular scenarios. Results using the simulation framework suggest that information sharing generally improves the general welfare. Moreover, we show that nodes that receive attacks that are new to the network provide information of higher quality even though they do not benefit from the sharing community, so they should be rewarded and motivated to share such information. In general, our experimental work shows that the proposed model can help to simulate network conditions and adversarial settings in order to analyse beneficial sharing policies, both in terms of particular nodes and in terms of the general welfare of the sharing community. It can be used as part of a decision support system or during countermeasure allocation processes.
7. ACKNOWLEDGEMENTS
This work was partially supported by the MINECO grant TIN2013-46469-R and the CAM grant S2013/ICE-3095 (CIBERDINE-CM), funded by the Madrid Autonomous Community and co-funded by European funds.
8. REFERENCES
[1] A. Abdul-Rahman and S. Hailes. Supporting trust in
virtual communities. In System Sciences, 2000.
Proceedings of the 33rd Annual Hawaii International
Conference on, pages 9–pp. IEEE, 2000.
[2] M. Beckenkamp, H. Hennig-Schmidt, and F. P.
Maier-Rigaud. Cooperation in symmetric and
asymmetric prisoner’s dilemma games. MPI Collective
Goods Preprint, (2006/25), 2007.
[3] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and
D.-U. Hwang. Complex networks: Structure and
dynamics. Physics reports, 424(4):175–308, 2006.
[4] C. Camerer. Behavioral game theory. New Age
International, 2010.
[5] B. Drabble. Information propagation through a
dependency network model. In Collaboration
Technologies and Systems (CTS), 2012 International
Conference on, pages 266–272. IEEE, 2012.
[6] J. W. Eaton, D. Bateman, S. Hauberg, and
R. Wehbring. GNU Octave version 4.0.0 manual: a
high-level interactive language for numerical
computations. 2015.
[7] E. Luiijf and A. Kernkamp. Sharing cyber security information. TNO, March 2015.
[8] European Parliament and Council of the European Union. Directive of the European Parliament and of the Council concerning measures for a high common level of security of network and information systems across the Union, 2016. PE 26 2016 INIT - 2013/027 (OLP).
[9] J. Freudiger, E. De Cristofaro, and A. E. Brito.
Controlled data sharing for collaborative predictive
blacklisting. In Detection of Intrusions and Malware,
and Vulnerability Assessment, pages 327–349.
Springer, 2015.
[10] D. Gambetta et al. Can we trust trust? Trust: Making and breaking cooperative relations, 13:213–237, 2000.
[11] P. R. Garvey and C. A. Pinto. Introduction to
functional dependency network analysis. In The
MITRE Corporation and Old Dominion, Second
International Symposium on Engineering Systems,
MIT, Cambridge, Massachusetts, 2009.
[12] J. Granatyr, V. Botelho, O. R. Lessing, E. E. Scalabrin, J.-P. Barthès, and F. Enembreck. Trust and reputation models for multiagent systems. ACM Computing Surveys, 48(2):27, 2015.
[13] C. Guariniello and D. DeLaurentis. Communications,
information, and cyber security in systems-of-systems:
Assessing the impact of attacks through
interdependency analysis. Procedia Computer Science,
28:720–727, 2014.
[14] J. L. Hernandez-Ardieta, J. E. Tapiador, and
G. Suarez-Tangil. Information sharing models for
cooperative cyber defence. In Cyber Conflict (CyCon),
2013 5th International Conference on, pages 1–28.
IEEE, 2013.
[15] The White House. National Security Presidential Directive/NSPD-54, Homeland Security Presidential Directive/HSPD-23, 2008. NSPD-54/HSPD-23.
[16] Ponemon Institute. 2016 cost of data breach study: Global analysis. Technical report, Ponemon Institute, 2016.
[17] L. R. Izquierdo, S. S. Izquierdo, and F. Vega-Redondo.
Learning and evolutionary game theory. In
Encyclopedia of the Sciences of Learning, pages
1782–1788. Springer, 2012.
[18] C. Karsberg and C. Skouloudi. Annual incident
reports 2014. Technical report, ENISA, 2015.
[19] M. Khouzani, V. Pham, and C. Cid. Strategic
discovery and sharing of vulnerabilities in competitive
environments. In Decision and game theory for
security, pages 59–78. Springer, 2014.
[20] S. Laube and R. Böhme. Mandatory security information sharing with authorities: Implications on investments in internal controls. In Proceedings of the 2nd ACM Workshop on Information Sharing and Collaborative Security, pages 31–42. ACM, 2015.
[21] J. Li. How to design a strategy to win an IPD tournament. The Iterated Prisoner's Dilemma, 20:89–104, 2007.
[22] G. Meng, Y. Liu, J. Zhang, A. Pokluda, and
R. Boutaba. Collaborative security: A survey and
taxonomy. ACM Computing Surveys (CSUR), 48(1):1,
2015.
[23] P. Naghizadeh and M. Liu. Inter-temporal incentives
in security information sharing agreements. In
Position paper for the AAAI Workshop on Artificial
Intelligence for Cyber-Security, 2016.
[24] G. Oliva, S. Panzieri, and R. Setola. Agent-based
input–output interdependency model. International
Journal of Critical Infrastructure Protection,
3(2):76–82, 2010.
[25] B. Petrenj, E. Lettieri, and P. Trucco. Information
sharing and collaboration for critical infrastructure
resilience–a comprehensive review on barriers and
emerging capabilities. International Journal of Critical
Infrastructures, 9(4):304–329, 2013.
[26] A. Rutkowski, Y. Kadobayashi, I. Furey, D. Rajnovic, R. Martin, T. Takahashi, C. Schultz, G. Reid, G. Schudel, M. Hird, et al. CYBEX: The cybersecurity information exchange framework (X.1500). ACM SIGCOMM Computer Communication Review, 40(5):59–64, 2010.
[27] M. Shor. ”payoff,” dictionary of game theory terms.
http://www.gametheory.net/dictionary/Payoff.html.
Accessed: 2016-07-11.
[28] L. B. Spijkervet. Less is more. Master’s thesis, Delft
University of Technology, 2014.
[29] S. Subramanian, D. Robinson, et al. 2014 Deloitte-NASCIO cybersecurity study. State governments at risk: Time to move forward. Technical report, Deloitte and NASCIO, 2014.
[30] D. Tosh, S. Sengupta, C. Kamhoua, K. Kwiat, and
A. Martin. An evolutionary game-theoretic framework
for cyber-threat information sharing. In IEEE
International Conference on Communications, pages
7341–7346. IEEE, 2015.
[31] Y. Wang and C. Ng. Asymmetric payoff mechanism and information effects in water sharing interactions: A game theoretic model of collective. In International Komosozu Society, Mt. Fuji, Japan, 2013, page 68. IASC, 2013.
Full-text available
Conference Paper
The initiative to protect against future cyber crimes requires a collaborative effort from all types of agencies spanning industry, academia, federal institutions, and military agencies. Therefore, a Cybersecurity Information Exchange (CYBEX) framework is required to facilitate breach/patch related information sharing among the participants (firms) to combat cyber attacks. In this paper, we formulate a non-cooperative cybersecurity information sharing game that can guide: (i) the firms (players) 1 to independently decide whether to " participate in CYBEX and share " or not; (ii) the CYBEX framework to utilize the participation cost dynamically as incentive (to attract firms toward self-enforced sharing) and as a charge (to increase revenue). We analyze the game from an evolutionary game-theoretic strategy and determine the conditions under which the players' self-enforced evolutionary stability can be achieved. We present a distributed learning heuristic to attain the evolutionary stable strategy (ESS) under various conditions. We also show how CYBEX can wisely vary its pricing for participation to increase sharing as well as its own revenue, eventually evolving toward a win-win situation.
Full-text available
Book
The failure of a national critical infrastructure may seriously impact the health and well-being of citizens, the economy, the environment, and the functioning of the government. Moreover, critical infrastructures increasingly depend on information and communication technologies (ICT) or, in short, cyber. Cyber security and resilience are therefore seen as increasingly important governance topics and major challenges for today’s societies, as the threat landscape is continuously changing. Sharing cyber security related information between organisations – in a critical sector, cross-sector, nationally and internationally – is widely perceived as an effective measure in support of managing the cyber security challenges of organisations. Information sharing, however, is not an easy topic. It comes with many facets. For example, information sharing spans strategic, tactical, operational and technical levels; spans all phases of the cyber incident response cycle (proactive, pre-emption, prevention, preparation, incident response, recovery, aftercare/ follow up); is highly dynamic; crosses the boundary of public and private domains; and concerns sensitive information which can be potentially harmful for one organisation on the one hand, while being very useful to others. This Good Practice on information sharing discusses many of these facets. Its aim is to assist you as public and private policy-makers, middle management, researchers, and cyber security practitioners, and to steer you away from pitfalls. Reflect on the earlier lessons identified to find your own effective and efficient arrangements for information sharing which fit your specific situation.
Full-text available
Article
The analysis of risks associated with communications, and information security for a system-of-systems is a challenging endeavor. This difficulty is due to the complex interdependencies that exist in the communication and operational dimensions of the system-of-systems network, where disruptions on nodes and links can give rise to cascading failure modes. In this paper, we propose the modification of a functional dependency analysis tool, as a means of analyzing system-of-system operational and communication architectures. The goal of this research is to quantify the impact of attacks on communications, and information flows on the operability of the component systems, and to evaluate and compare different architectures with respect to their reliability and robustness under attack. Based on the topology of the network, and on the properties of the dependencies, our method quantifies the operability of each system as a function of the availability and correctness of the required input, and of the operability of the other systems in the network. The model accounts for partial capabilities and partial degradation. Robustness of the system-of-systems is evaluated in terms of its capability to maintain an adequate level of operability following a disruption in communications. Hence, different architectures can be compared based on their sensitivity to attacks, and the method can be used to guide decision both in architecting the system-of-systems and in planning updates and modifications, accounting for the impact of interdependencies on the robustness of the system-of-systems. Synthetic examples show conceptual application of the method.
Full-text available
Conference Paper
Although sharing data across organizational boundaries has often been advocated as a promising way to enhance security, collaborative initiatives are rarely put into practice owing to confidentiality, trust, and liability challenges. In this paper, we investigate whether collaborative threat mitigation can be realized via a controlled data sharing approach, whereby organizations make informed decisions as to whether or not, and how much, to share. Using appropriate cryptographic tools, entities can estimate the benefits of collaborating and agree on what to share in a privacy-preserving way, without having to disclose their entire datasets. We focus on collaborative predictive blacklisting, i.e., forecasting attack sources also based on logs contributed by other organizations and study the impact of different sharing strategies by experimenting on a real-world dataset of two billion suspicious IP addresses collected from Dshield over two months. We find that controlled data sharing yields up to an average 105% accuracy improvement, while also reducing the false positive rate.
Full-text available
Article
Critical considerations in engineering enterprise systems are identifying, representing, and measuring dependencies between suppliers of technologies and providers of services to consumers and users. The importance of this problem is many-fold. Primary is enabling the study of ripple effects of failure in one capability on other dependent capabilities across the enterprise. Providing mechanisms to anticipate these effects early in design enables engineers to minimize dependency risks that, if realized, can have cascading negative effects on the ability of an enterprise to deliver services to users. The approach to this problem is built upon concepts from graph theory. Graph theory enables (1) a visual representation of complex interrelationships between entities and (2) the design of analytical formalisms that trace the effects of dependencies between entities as they affect many parts and paths in a graph. In this context, an engineering system is represented as a directed graph whose entities are nodes that depict direction, strength, and criticality of supplier-provider relationships. Algorithms are designed to measure capability operability (or inoperability) due to degraded performance (or failure) in supplier and program nodes within capability portfolios that characterize the system. Capturing and analyzing dependencies is not new in systems engineering. New is tackling this problem (1) in an enterprise systems engineering context where multidirectional dependencies can exist at many levels in a system's capability portfolio and (2) by creating a flexible analysis and measurement approach applicable to any system's capability portfolio, whose supplier-provider relationships can be represented by graph theoretic formalisms. The methodology is named Functional Dependency Network Analysis (FDNA). Its formulation is motivated, in part, by concepts from Leontief systems, the Inoperability Input-Output Model (IIM), Failure Modes and Effects Analysis (FMEA), and Design Structured Matrices (DSM). FDNA is a new analytic approach. One that enables management to study and anticipate the ripple effects of losses in supplier-program contributions on a system's dependent capabilities before risks that threaten these suppliers are realized. An FDNA analysis identifies whether the level of operability loss, if such risks occur, is tolerable. This enables management to better target risk resolution resources to those supplier programs that face high risk and are most critical to a system's operational capabilities. KEY WORDS: Risk, capability risk, capability portfolio, dependencies, operability, inoperability, engineering systems, Leontief matrix, design structured matrix (DSM), failure mode and effects analysis (FMEA), inoperability input-output model (IIM), functional dependency network analysis (FDNA).
Article
FABR´ICIOFABR´FABR´ICIO ENEMBRECK, PPGIa: Graduate Program on Informatics – Pontifical Catholic University of ParanáParan´Paraná – PUCPR Finding reliable partners to interact with in open environments is a challenging task for software agents, and trust and reputation mechanisms are used to handle this issue. From this viewpoint, we can observe the growing body of research on this subject, which indicates that these mechanisms can be considered key elements to design multiagent systems (MASs). Based on that, this article presents an extensive but not exhaustive review about the most significant trust and reputation models published over the past two decades, and hundreds of models were analyzed using two perspectives. The first one is a combination of trust dimensions and principles proposed by some relevant authors in the field, and the models are discussed using an MAS perspective. The second one is the discussion of these dimensions taking into account some types of interaction found in MASs, such as coalition, argumentation, negotiation, and recommendation. By these analyses, we aim to find significant relations between trust dimensions and types of interaction so it would be possible to construct MASs using the most relevant dimensions according to the types of interaction, which may help developers in the design of MASs.
Conference Paper
We investigate the incentives behind investments by competing companies in discovery of their security vulnerabilities and sharing of their findings. Specifically, we consider a game between competing firms that utilise a common platform in their systems. The game consists of two stages: firms must decide how much to invest in researching vulnerabilities, and thereafter, how much of their findings to share with their competitors. We fully characterise the Perfect Bayesian Equilibria (PBE) of this game, and translate them into realistic insights about firms’ strategies. Further, we develop a monetary-free sharing mechanism that encourages both investment and sharing, a missing feature when sharing is arbitrary or opportunistic. This is achieved via a light-handed mediator: it receives a set of discovered bugs from each firm and moderate the sharing in a way that eliminates firms’ concerns on losing competitive advantages. This research provides an understanding of the origins of inefficiency and paves the path towards more efficient sharing of cyber-intelligence among competing entities.
Conference Paper
New regulations mandating firms to share information on security breaches and security practices with authorities are high on the policy agenda around the globe. These initiatives are based on the hope that authorities can effectively advise and warn other firms, thereby strengthening overall defense and response to cyberthreats in an economy. If this mechanism works (as assumed in this paper with varying effectiveness), it has consequences on security investments of rational firms. We devise an economic model that distinguishes between investments in detective and preventive controls, and analyze its Nash equilibria. The model suggests that firms subject to mandatory security information sharing 1) over-invest in security breach detection as well as under-invest in breach prevention, and 2), depending on the enforcement practices, may shift investment priorities from detective to preventive controls. We also identify conditions where the regulation increases welfare.
Article
Among authors, researchers and governmental agencies, information sharing and collaboration have been recognised as a critical part for improving crisis response effectiveness and efficiency, since no single organisation has all the necessary resources, possesses all the relevant information or owns expertise to cope with all types of extreme events. This work presents a review study on general issues and barriers to information sharing and collaboration during CI crisis response. Emerging concepts and capabilities that are promising for making an improvement in the field, such as NEO, SOA and SOA-based NEO, are also presented and discussed. Possible contribution to CI protection and resilience (CIP/R) is discussed concerning the importance of matching organisational structure characteristics, technological capabilities and sociological influence. The needs and opportunities for future research are also highlighted, emphasising the need for a comprehensive framework of analysis and deployment.