To cite this article: Dan Geer, Eric Jardine & Eireann Leverett (2020) On market concentration and cybersecurity risk, Journal of Cyber Policy, 5:1, 9-29, DOI: 10.1080/23738871.2020.1728355
© 2020 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. Published online: 24 Feb 2020.
On market concentration and cybersecurity risk
Dan Geer (a), Eric Jardine (b) and Eireann Leverett (c)
(a) In-Q-Tel, Cambridge, USA; (b) Virginia Tech, Blacksburg, USA; (c) Concinnity Risks, UK
ABSTRACT
Market concentration affects each component of the cybersecurity risk equation (i.e. threat, vulnerability and impact). As the Internet ecosystem becomes more concentrated across a number of vectors from users and incoming links to economic market share, the locus of cyber risk moves towards these major hubs and the volume of systemic cyber risk increases. Mitigating cyber risk requires better measurement, diversity of systems, software and firms, attention to market concentration in cyber insurance pricing, and the deliberate choice to avoid ubiquitous interconnection in critical systems.
ARTICLE HISTORY
Received 26 July 2019
Revised 12 December 2019
Accepted 3 January 2020
1. Introduction
Trends towards market concentration are the new normal in IT systems, on the Internet, and across the World Wide Web. Concentration is often the cumulative result of myriad small, independent, and freely taken choices, though the deliberate acts of organisations to absorb competition ought not be minimised. Online, mechanisms of preferential attachment often dominate (Barabási and Albert 1999; Barabási 2014; Jardine 2017a). People tend to frequent online shops, services, and software that are already widely used. Everybody uses Facebook, Microsoft, Apple, TikTok, Snapchat, WhatsApp or Google, to name a few, because everyone else uses these same services. In many instances, the reward for interconnection, as famously dictated by Metcalfe's Law, is proportional to the number of possible two-way connections, that is, proportional to the number of nodes squared (Metcalfe 1995; Metcalfe 2013; Zhang, Liu, and Xu 2015). The implication is that the greatest user and corporate value is often found at the most frequented places.
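As a back-of-the-envelope form of that observation (a standard calculation, not one taken from the article), the number of possible two-way connections among n nodes is n(n-1)/2, which is approximately n^2/2 for large n. Doubling the user base therefore roughly quadruples the number of possible connections, which is why value pools so quickly at the most frequented hubs.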
The cumulative result of numerous independent choices is a space with extraordinarily big platforms, systems and providers. The examples of such points of concentration range across operating systems (e.g. Microsoft Windows; Apple OS), protocols (e.g. WPA2; WiFi), e-commerce sites (e.g. Amazon; Alibaba), social networking sites (e.g. Facebook; Twitter), content delivery networks (e.g. Akamai; Cloudflare), cloud computing services (e.g. AWS; Azure), antivirus vendors (e.g. Symantec; ESET), aggregation platforms (e.g. Reddit), and so forth.
This tendency towards market concentration has a number of effects on systemic cyber risk, many of which mirror similar concentration risk effects in other sectors of the economy and society (Gürtler, Hibbeln, and Vöhringer 2010; Mandelbrot 2013; Zhang
et al. 2013; Dhaliwal et al. 2016). These effects play out at every level of the cyber risk equation, that is, across threat, vulnerability and impact. While in no way exhaustive of the complexity in this domain, this paper details three theses on the relationship between concentration and cyber risk.
Thesis One: Market concentration has risk redistribution effects, often changing who gets targeted. Large operating systems, platforms, protocols and organisations often act as magnets for malicious activity.

Thesis Two: Market concentration can increase systemic vulnerability. Smaller organisations can sometimes improve their security posture by transferring their security to larger organisations. The degree of security improvement (if any) depends, however, on the volume of attacks, the distribution of attacks across organisations, and base rates of security performance across firms. In many cases, transfers of security to larger providers, such as cloud services or CDNs, can actually increase levels of vulnerability under quasi-knowable circumstances. Optimisation is key.

Thesis Three: When points of major market concentration fall to malicious attack, the impact is far more significant than in more distributed systems due to significant (and often under-recognised) issues of scale, complexity and ecosystem interdependency.
Overall, these three theses suggest that market concentration tends to have both redistributive and accentuating effects on cyber risk. Market concentration, broadly defined, tends to shift the risk of being attacked from individuals and minor nodes towards major players. Market concentration can also exacerbate systemic vulnerability and significantly increase the odds of high impact, system-wide failure. Dealing with cyber risk requires dealing with market concentration.

To make our case, the article unfolds this way: The following sections first detail the cyber risk (i.e. threat, vulnerability and impact) equation and define our conceptualisation of market concentration. Extending that line of argument, each section plots the interaction of patterns of market concentration and a discrete part of cyber risk, showing in detail and with illustrative examples how the former influences the latter. We conclude with policy suggestions for mitigating cyber risk in highly concentrated systems.
2. Defining cyber risk and market concentration
Cyber risk, like risk in any adversarial space more generally, is a function of threat, vulnerability and impact (Cox 2008; Hubbard and Seiersen 2016; Coburn, Leverett, and Woo 2019). The threat component of the equation is the probability that an organisation will be targeted by malicious activity. It can range from 0 per cent, not attacked, to 100 per cent, definitely attacked in a given period. The vulnerability component of the cyber risk equation captures what happens if an organisation is targeted by a malicious actor. Some intrusion attempts will fail. Others will succeed. The vulnerability component captures this variance by assigning a probability that a targeted organisation can mount a successful defence, given both the competence of the attacker and the skills of the defender. This component, too, ranges from 0 per cent to 100 per cent, i.e. from always failing at security to always succeeding. The final component of the formula is impact, which denotes the dollars, hours of downtime, or some other measure of fallout from a malicious attack. Multiplied together, the three components provide a sense of an organisation's level of cyber risk over a given period.
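To make the multiplication concrete, here is a minimal sketch of the calculation with purely hypothetical figures (none of these numbers appear in the article):

```python
# Hypothetical illustration of risk = threat x vulnerability x impact.
# None of these figures come from the article; they are placeholders.
p_attacked = 0.30              # threat: probability of being targeted this period
p_breach_if_attacked = 0.10    # vulnerability: probability a defence fails when attacked
impact_usd = 2_000_000         # impact: fallout of a successful breach, in dollars

expected_loss = p_attacked * p_breach_if_attacked * impact_usd
print(f"Expected loss this period: ${expected_loss:,.0f}")  # $60,000
```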
Formally, the scope of market concentration in an ecosystem can be measured by the Herfindahl-Hirschman Index (HHI). The HHI is measured as the sum of the square of each organisation's market share. The result of this mathematical process is a number ranging from 1 to 10,000, with perfect monopoly at 10,000 and a perfectly competitive system at an HHI value of 1 (Hirschman 1964; Hirschman 1980; Rhoades 1993). In practice, markets with an HHI above 2,500 are considered very concentrated.
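As a short illustration of the calculation (the firms and shares below are invented for the example):

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).
# The firms and shares are invented for illustration and sum to 100.
shares = {"Firm A": 40, "Firm B": 30, "Firm C": 20, "Firm D": 10}

hhi = sum(share ** 2 for share in shares.values())
print(hhi)  # 1600 + 900 + 400 + 100 = 3000, above the 2,500 'very concentrated' line
```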
More amorphously than a precise HHI number for each sector of the economy and Internet, concentration for our purposes can take several forms. Market concentration is about size, scale, centralisation and interconnectedness. It is, in the broadest terms, the amount of records, dollars, users, decision-making prerogative or degree centrality (incoming/outgoing links) a platform, provider, protocol, code base or organisation has when compared to the sum of all other like entities. Concentration can unfold along geographical space, time and name space. Systems with high levels of market concentration have, relative to the rest of the participants in that area of activity, big companies, big platforms, big providers, big intermediaries, big everything. Framed differently, concentrated organisations, platforms, or protocols have lots of links, customers, users or clients. Concentrated systems are ones where the choices of a big few affect the outcomes of the many. Concentrated systems can also be viewed as the opposite of diverse systems, meaning their component parts are more similar than not. Concentrated code bases, for instance, can be marked by something as simple as the proportion of used software written in the same programming language (COBOL, anyone?).
Concentration has several benefits economically. For example, firms plausibly emerge at all in order to contend with market-based transaction costs (Coase 2012). Bigger firms can also leverage their size to capture increasing returns to scale and scope, improve their efficiency through greater task specialisation, and make longer term investments that maximise their revenue and profit growth. Too much scale, centralisation and concentration, of course, can each produce their own set of economic disadvantages, but there are clear economic reasons why organisations tend to get bigger, centralise their decision-making, and expand their proverbial footprints.
Yet concentration, as we detail in depth in the following sections, can also result in
security/cyber risk consequences. Table 1 presents a non-exhaustive set of examples of
the types of concentration we have in mind when we use the phrase 'market concentration'. Table 1 also details: (1) the reason it might be economically rational to concentrate and (2) some of the security consequences that can emerge from each type of concentration when viewed from an attacker's perspective.

Table 1. Forms of concentration and their economic, security, and cyber risk consequences.

| Type of concentration | Potential economic benefits | Potential security consequences |
| --- | --- | --- |
| Users | Network externalities | The value of breaching the system grows supra-linearly as the user base increases |
| Homogeneous code base | Wide labour pool of potential coders to develop new platforms | Lower cost to develop and employ malware at the same or greater reward to the attacker |
| Homogeneous platform use | Lower training and maintenance cost for a system | Increases the value of a vulnerability in the platform for an attacker |
| Centralised decision-making (ecosystem or organisational) | Improved efficiency of decisions; clear strategic direction | Homogeneous outcomes, if exploitable, lead to widespread compromise |
| Interconnection | Improved quality of service; reduced cost for components | Third party risks and ripple effects |
Table 1 is meant to highlight some of the diversity of possible cases that could be included under a comprehensive analysis of the interrelationships between market concentration and cyber risk. Our analysis below draws upon different cases that exemplify certain patterns and trends. We selected the cases largely for their data richness. They are not systematic, but are indicative of what the general theses in each section suggest we ought to observe (King, Keohane, and Verba 1994). In other words, the cases do not suggest that concentration always produces the detailed effects on security, only that the logic is there and some supportive evidence can be readily found.

Next, we detail how concentration interacts discretely with each component of the cyber risk equation and generally across all three component parts. These interactions affect both the distribution of risk and the total amount of risk present in the system.
3. Thesis 1: market concentration and threat
Given a set of attack interests, capabilities and capacities, the threat component of the risk equation is about who gets attacked in the first place. More formally, and from the defender's perspective, it is the probability that a malicious actor will expend the resources and time to try and compromise a given target's systems. Threat matters because an organisation can be said to have zero current risk from malicious actors if it is never attacked. Of course, zero past risk only correlates with, and does not determine, future risk.

Not all organisations or individuals are equally likely to be targeted by malicious attacks, and some attacks are opportunistic rather than targeted (or rather they target a technology such as default SSH credentials, instead of an organisation or individual). Certainly, some famous cyber intrusions are simple attacks of convenience. Equifax's failure to patch Apache Struts in 2017 created an easy way in for whoever compromised the system, suggesting more of an attack of opportunity than intent per se, though revelations of potential Chinese state involvement complicate the previously known story (Newman 2017). Other attacks, such as those that targeted the Democratic National Committee before the 2016 election, are highly deliberate, targeted and persistent.
Concentration directly affects the threat component of the risk equation by changing the malicious actor's opportunity space. Concentration, in this sense, has predominantly distributional effects on the threat component of cyber risk, namely, concentration leads to risk transference. Who gets targeted is a function of potential reward for malicious actors, which is, in turn, a function of a potential target's size or network centrality. Increased size and position correlate with malicious actor payout, whether that reward is financial, disruptive or geopolitical.

As a market concentrates, the risk of being targeted by a cyberattack transfers from small, less central organisations to the major hubs or their counterparties (e.g. the Target breach). For smaller organisations that might not be able to assume the financial cost of greater levels of cybersecurity, the transference of risk from small to big organisations would be a blessing. From the perspective of the highly concentrated organisations, the increased volume of attacks that need to be dealt with imposes additional operating costs. In the Internet economy, being a major hub is often necessary for profitability (Jardine 2017a), but it also means that your platform will act like a magnet for malicious activity.
The history of malware targeting personal computer operating systems exemplifies this process of risk transference. For years, Windows OS was heavily targeted by malware designers. Mac OS, in contrast, maintained a mythology of a malware-free system. This initial state of affairs, however, was not necessarily just a function of Macs having better OS security; divergent malware development rates for the two platforms are well explained by simple economic incentives.

Malicious actors should want to attack points with the highest reward ratio. And indeed, a formal model shows that market share drives malware development markets. Assuming equal levels of OS vulnerability, malware designers will target only the OS with the largest market share when the ratio of the dominant OS to the next largest OS is greater than the protection potential of the dominant OS overall (O'Donnell 2008). This formal model suggests that rational malicious actors would not even waste time designing malware to target Mac OS until it accounted for at least one-sixth of the OS market. Other formal models show similar trends in the concentration of malicious actor activity on dominant platforms (Arce 2018). In risk-based terms, the concentration of users on Windows OS for much of the history of personal computing had a redistributive effect on the threat component of risk. It transferred the risk of being targeted by malicious actors in the first place from Mac and Linux OS users to those who used Windows.
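A rough sketch of the targeting logic described above, with the caveat that the shares and the 'protection potential' value are illustrative placeholders and the condition paraphrases O'Donnell's result rather than reproducing his model:

```python
# Sketch of the targeting rule paraphrased above: an attacker ignores the
# runner-up OS while the dominant OS's share advantage exceeds its
# 'protection potential'. Shares and the potential value are illustrative.
def attack_only_dominant(dominant_share, runner_up_share, protection_potential):
    """True if a rational attacker targets only the dominant OS."""
    return (dominant_share / runner_up_share) > protection_potential

print(attack_only_dominant(0.90, 0.08, 6))  # True: the runner-up is not worth the effort
print(attack_only_dominant(0.70, 0.20, 6))  # False: the runner-up now draws attention
```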
These earlier trends have had a legacy effect on risk among software platforms, making Windows still a more targeted system than alternative operating systems. But the final point that this example illustrates is more general: points of concentration promise the most reward and will be targeted the most frequently. Users of secondary systems, platforms or services might give up some benefits in terms of coordinated efficiency or positive network externalities, but might also reduce their cyber risk.
4. Thesis 2: concentration and vulnerability
The vulnerability component of cyber risk captures the expected/realised outcome of an attack on an organisation. It is a simple truism that attempted attacks sometimes succeed and sometimes fail. Attacks vary considerably in their sophistication and persistence. Organisational defences, both technical and human, similarly vary, ranging from thorough to haphazard. Together, the interaction of the sophistication of the attacker, the defensive profile of the organisation, and a dose of luck produces some volume of successful and failed intrusions. In cyber risk terms, each organisation (assuming it is attacked in the first place) has a probability of being compromised over a given period. At the extremes, well-defended networks faced with unsophisticated attackers are likely to remain secure, while poorly defended networks faced off against competent and persistent opponents are likely to be breached.
Concentration affects the vulnerability component of the cyber risk equation. Three distinct effects emerge, each of which is unpacked more below. First, concentration in large providers can lead to an average reduction in the individual vulnerability of users, website operators and organisations due to the often greater security services provided by these major hubs. Second, since concentrated nodes tend to get targeted more often, joint probabilities suggest that they might be vulnerable across a large enough sequence of attacks. Third, platform and service concentration can also lead to increases in complexity, which can have negative effects at the level of software, increasing the proportion of vulnerable code and worsening time-to-patch rates, leaving more systems vulnerable for longer windows of time. The first effect suggests that individuals can leverage scale to reduce their risk. The second and third suggest that at a systemic level, concentration can increase vulnerability and worsen cyber risk.
4.1. A first positive: professionalism, scale and security
Concentration is not all bad. Some individuals and organisations can improve their cyber risk profile, to a degree, by using concentrated services. Concentrated nodes can be, on average, better at providing security than less concentrated systems. The financial sector provides an example. Smaller firms (measured by revenue) actually suffer larger direct losses from cyber security events than bigger firms. This outcome is 'possibly due to lower absolute investment in IT security' (Bouveret 2018, 8). Security for smaller, less concentrated firms can be improved by transferring some or all of their security provision to bigger players. These security gains can be exemplified through the example of antivirus software (AV), though other services such as spam filters or content delivery networks behave similarly. In each case, a combination of professionalism and scale advantages translates over into increased security effectiveness to some degree.
AV technology can allow users to leverage concentrated resources nested within large commercial firms to better protect the security of their devices. Scale effects are not the same as the effects of professionalisation. AV allows users to outsource personal device protection to a third party. By employing AV, users basically say that dealing with malware is better done by professionals. Malware development occurs at a remarkably fast pace. According to the AV Test Institute, upwards of 350,000 new malware variants are flagged every day (AV Test Institute 2019). In May of 2019 alone, 9.31 million new malware variants were recorded. Contending with this volume of malicious activity is challenging for individual users and organisations. Using AV is often a rational protective measure.

Professional services can provide better security than average users could produce on their own. While such security outsourcing can lead to maladaptive behavioural changes that may ultimately worsen security outcomes especially over the short run (Jardine n.d.), the record of the empirical effectiveness of commercial AV systems suggests there is malware protection to be had. One slightly older empirical investigation of AV effectiveness, for instance, found that 62 per cent of malware is effectively stopped at the water's edge at the time of first exposure. Up to another 16.5 per cent is identified and blocked within one month's time (Sukwong, Kim, and Hoe 2011).
Not all of this effectiveness, however, follows from the work of professionals. Market concentration also matters. Identification of new malware requires a bird's eye view of network activity and trends, best had from a large user base and sensor network. Market concentration, in this case, is an effective driver of improved malware identification. The more users or sensors that an AV company has looking for new malware variants, the greater the ability of the company to identify emergent problems, produce signatures to protect users from new malware, and, by extension, provide better security for users. Clearly, some portion of the protections that are to be had from using AV come from outsourcing security from the individual to a professionalised third party. Yet some distinct gains also plausibly emerge as a function of the size of the third party itself: more sensors and users make AV malware detection better.
Spam filters in email would be another complementary example. Gmail can block spam very effectively by leveraging its significant user base to both crowdsource potentially malicious messages through manual tags and automatically detect spam and phishing attempts through automated matching of phrases given their huge corpus of available text. Content delivery networks provide similar benefits. CDNs can leverage in a coordinated way their capacity to deal with network traffic. The larger the CDN, the more capacity they can marshal and the better the protection they can, on average, provide. Other examples beyond these more technical services likewise apply. For instance, the Payment Card Industry Security Standards Council is effective at setting security standard adoption (such as PCI DSS) precisely because they are concentrated (Woods and Moore 2019).

In this sense, individual cyber risk can sometimes be reduced by concentrating security resources. Individuals alone are vulnerable. Individuals relying on concentrated firms can improve their baseline vulnerability rate considerably. Using commercial AV, content delivery networks, large email clients or cloud computing resources can, in other words, result in improved cyber risk outcomes for individual users.
4.2. A first negative: concentration, security and repeated attacks
Yet, even the extra security that the concentrated hubs can provide can also be overwhelmed. Here, the effect of market concentration on the threat component of cyber risk (i.e. who gets targeted in the first place) has a knock-on effect on the vulnerability of major nodes. The key mechanism at work is what is known as joint probability. The simplest example is what happens every time you flip a coin. Each round, regardless of what happened before, there is a 50 per cent chance that the outcome will be either heads or tails. The last fifty flips could have turned up heads, yet the next round will still be heads with a 50 per cent probability. This notion jars intuition, but it is correct when each toss is considered as a discrete event. Looking at the outcomes of the various coin flips as a sequence provides a fundamentally different answer. The probability that 51 sequential coin flips will come up heads is terrifically small (around 4.44e-16).

Joint probability is likewise a problem for cyber risk, and concentration can exacerbate this issue. If the concentrated points of the ecosystem (those with a lot of users, clients, incoming links, etc.) get targeted by malicious actors more because of their size, then they need to fend off huge volumes of attacks. Over most meaningful numbers of security incidents, the probability that the organisation that is defending a network will be successful across all attacks falls to nearly zero remarkably quickly.
Imagine, for example, three separate organisations. Given an ecosystem of attackers of various competency, assume that one has a 90 per cent probability of successful defence (Organization A). Another (Organization B) has a 99 per cent probability of successful defence. The last is very secure, with a 99.9 per cent (Organization C) probability of successful defence. Each of these organisations has a reasonably good chance of defending against any individual intrusion attempt. In other words, every individual port scan or phishing email would be deflected 90 per cent, 99 per cent, or 99.9 per cent of the time.

But, what happens across these organisations over a sequence of otherwise independent attacks is another story (Figure 1). As with the flip of 51 heads in a row, the
probability that the hypothetical organisations can successfully defend against every attempted attack drops to nearly zero in no time. For Organization A, successful defence across the full sequence of attacks drops below 1 per cent by the 44th attack (90%^44). Organization B remains resilient for longer, but drops below a 1 per cent probability of successful defence across all attacks by the 459th intrusion attempt (99%^459). Organization C, finally, is by far the most robust of the three (pointing to the exponential effects of joint probabilities). At the 459th attack, Organization C still has a 63.18 per cent chance of successfully defending against all attempted intrusions. In fact, it takes 4,608 attempted attacks for the probability of successful defence for Organization C to fall below the 1 per cent mark (99.9%^4,608). This is clearly a much-improved outcome, but still potentially insufficient at Internet scale. For context, the US Department of Defense alone receives 36 million potentially malicious (non-legitimate) emails per day (Konkel 2018).
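The figures above can be reproduced with a few lines of arithmetic; the per-attack defence rates are the hypothetical ones used in the text:

```python
# Joint probability of defending every attack in a sequence of n independent
# attempts, given a fixed per-attack probability of successful defence.
def p_defend_all(p_defend, n_attacks):
    return p_defend ** n_attacks

print(f"{p_defend_all(0.90, 44):.4f}")     # ~0.0097: below 1% by the 44th attack
print(f"{p_defend_all(0.99, 459):.4f}")    # ~0.0099: below 1% by the 459th attack
print(f"{p_defend_all(0.999, 459):.4f}")   # ~0.6318: Organization C still at ~63%
print(f"{p_defend_all(0.999, 4608):.4f}")  # ~0.0100: just below 1% by the 4,608th attack
```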
Joint probability also lends some clues as to the best security-maximising step for smaller firms. A common argument in favour of using cloud providers or CDNs is that their scale contributes to improved security performance, making them a better choice for smaller organisations than in-house security. Imagine the same top-performing organisation as above, which is effective at security in a single run 99.9 per cent of the time. Picture this organisation as a large concentrated firm, akin to a CDN or major cloud provider. Imagine further a smaller organisation that needs to provide security without any internalised benefits of scale and is effective 75 per cent of the time. The smaller organisation would be better off moving its security provision to the larger organisation in the event that they were both attacked once. In a hypothetical one-shot world, the smaller firm would be 24.9 percentage points more secure by transferring its security provision to the larger organisation.
Figure 1. Repeated attacks and the vulnerability of highly targeted hubs.
In a world of multiple and unevenly distributed attacks (i.e. the real world), the best security-maximising choice for the smaller firm becomes less clear and subject to a process of optimisation. The distribution of malicious activity, the total volume of attacks in the ecosystem, and the baseline security performance of the organisations in question all interact to determine the optimal choice. Figure 2 showcases three hypothetical attack-clustering scenarios.

The first scenario is one of extreme clustering. The comparative net target value of the big firm is so great that all attacks go towards the concentrated organisation in the ecosystem. Since the threat component of the cyber risk equation then equals zero for the small organisation (i.e. zero attacks * 75% successful defence * impact = zero), the small organisation is better off not transferring its security provision to the larger, better-at-security organisation. Linking the fate of the small firm to the big firm if the big firm suffers from all the attacks in the ecosystem can result in rapidly overwhelmed defences and a worse security outcome for the smaller organisation. Indeed, in a world where the small firm simply passes under the radar of a malicious actor, any volume of attacks makes coupling the small organisation's fate to the bigger firm a bad security deal.
In a world without a perfect clustering of attacks, the smaller organisation needs to optimise. In the second scenario in Figure 2, the smaller organisation does get attacked one time during the period in question. It has a 75 per cent chance of successfully defending against the attempted intrusion. From the small organisation's perspective, it can get more security by transferring its security provision to the larger organisation, as long as that organisation is attacked 287 or fewer times. Any more attacks, and it again becomes rational for the smaller organisation to maintain its own security, everything else being equal. The last scenario is like the second, but the smaller organisation gets attacked twice over the hypothetical period. The rate of clustering of attacks against the bigger organisation, in other words, is slightly less than before. In this case, the joint probability that the small organisation will successfully defend against both attempted intrusions is 56.25 per cent (i.e. 75%^2). There is, as a result, a wider range of security improvements that could be had if the small organisation transfers its security provision to the larger organisation. Indeed, if the bigger organisation is attacked 575 times, then the two organisations are equally vulnerable. Any fewer attacks against the bigger firm and the smaller organisation would be better off letting the big organisation do the heavy lifting of security.

Figure 2. Outsourcing security in a world of clustered attacks.
All this suggests that a concentrated ecosystem, by attracting malicious activity to the main hubs, also potentially produces negative knock-on effects for the vulnerability component of the cyber risk equation. Over enough attempted intrusions, the joint probability of successful defence across a large number of attacks asymptotically approaches zero. This analysis also highlights the important security optimisation choices that smaller, less targeted organisations need to make. Sometimes transferring security to a larger and more skilful organisation can be of benefit, but it depends upon how clustered the attempted intrusions in the ecosystem are, how many attacks there are in total, and what base rates of security performance are at play.
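The break-even thresholds in the scenarios above follow from the same joint-probability arithmetic. A minimal sketch, using the hypothetical 75 per cent and 99.9 per cent figures from the text, recovers the 287- and 575-attack cut-offs:

```python
import math

# Largest number of attacks the big provider can absorb before the small firm,
# defending k attacks on its own at p_small per attack, is better off in-house.
def break_even_attacks(p_small, k_attacks_on_small, p_big):
    in_house = p_small ** k_attacks_on_small  # small firm's own survival odds
    return math.floor(math.log(in_house) / math.log(p_big))

print(break_even_attacks(0.75, 1, 0.999))  # 287: outsource only if the provider sees <= 287 attacks
print(break_even_attacks(0.75, 2, 0.999))  # 575: the threshold widens with two attacks on the small firm
```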
Additionally, the analysis showcases the importance of multilayered organisational resilience, which could be conceptualised as placing sequential barriers in front of malicious actors as opposed to a single external firewall model. Interestingly, multiple barriers through which a malicious actor needs to successfully pass in order to gain meaningful access to a network or data flips the joint probability analysis above on its head: with each new well-defended gate through which a malicious actor needs to pass, it becomes exponentially more likely that the network will remain secure. For some firms such as CDNs or cloud providers, maintaining redundant systems that are sufficiently distinct (perhaps even geographically distributed) would help to make joint probabilities work in the favour of the defender.
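A brief sketch of how layering flips the arithmetic in the defender's favour (the per-gate breach probabilities here are invented for illustration):

```python
# With sequential, independent gates the attacker must breach every one, so the
# probability of full compromise falls multiplicatively with each added layer.
def p_full_compromise(per_gate_breach_probs):
    p = 1.0
    for p_breach in per_gate_breach_probs:
        p *= p_breach
    return p

print(f"{p_full_compromise([0.10]):.3f}")              # single perimeter: 0.100 per attempt
print(f"{p_full_compromise([0.10, 0.10, 0.10]):.3f}")  # three layered gates: 0.001 per attempt
```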
Yet, even factoring in resiliency, the defender needs to be effective at their primary duty every time, while the attacker need only succeed once to compromise the CIA triad (confidentiality, integrity and availability) at the core of information protection (Geer 2018b). Market concentration can make that harder to do.
4.3. The second negative: concentration, scale and complexity
Concentration is multidimensional. Concentration involves scale, which is part of the reason why individuals can potentially improve their security profile by relying on large services. Concentration, as it implies scale, also involves increased complexity. Nothing big is simple. Issues of complexity can play out at two levels: the baseline volume of faulty code and the time it takes larger organisations to patch flaws.
The largest edifices of code are, everything else being equal, likely to be the most complex. The link between concentrations of software use and software complexity is fairly clear. Highly used software will have some combination of a decisive incumbent user advantage over existing alternatives, a strong competitiveness factor that helps unseat incumbents in a world of preferential attachment (Barabási and Albert 1999; Barabási 2014) and diffuse functional design, allowing people to accomplish multiple tasks and potentially innovate, depending upon its openness, on top of the code (Zittrain 2008). In such cases, but especially with regard to wide-ranging function, more complexity of code tends to be correlated with more users.

Yet, by dint of that very complexity, concentrations of code are also the most likely to contain defective routines that can be found and exploited by malicious actors. One part of the equation is simply a function of the number of lines of code: more code, more potential defects, everything else being equal. But the accumulation of vulnerabilities might not be constant at scale, since scale entails complexity and, as Steven Johnson puts it, complexity 'characterizes the behaviour of a system or model whose components interact in multiple ways and follow local rules, meaning there is no reasonable higher instructions to define the various possible interactions' (Johnson 2001, 19). Put another way, the possibility of a flaw is not linearly more likely as the edifice of code grows. It is something else and it is hard to even say what rate of increase might be at play by dint of its very complexity.
The problem is particularly acute in today's global market for platforms and services. The market, especially with newer technologies such as the Internet of Things, is based around rapid innovation, first-to-market behaviour, and the use of stock code with overlaid tweaks for function. Products are often developed, sold, shipped, and then security is layered on afterwards if the market wants the good and can sustain the product. This pattern gives rise to persistent vulnerabilities that are easy to introduce, slow to be discovered, and often difficult to fix because they are so embedded in how a product is architected. These vulnerabilities can be weaponized. When they are, unpatched systems become vulnerable to malicious intrusion. Framed differently, the result of the need for speed is reused code, which is often selected based upon its expressiveness (Turing-completeness). The trouble is, the very code that has the greatest probability of being reused is the code that has the greatest probability of being rich enough in complexity to obscure exploitability (Geer 2018a). A few simple lines of code are likely to be error free; a complex edifice of code is disproportionately likely to have numerous hidden vulnerabilities that increase vulnerability and attendant cyber risk. Code that is self-modifying (think machine learning) may not even be analysable (Geer 2019).
Isolated flaws are fixable. When found, patches can be developed and software can be updated. With static code, constant observation and sufficient time, any given code base, no matter how complex, will eventually rid itself of most errors (Ozment and Schechter 2006). Unfortunately, at least two of these conditions often do not exist, as code is rarely static and time is always of the essence. Regardless, assuming no adverse interactions, the faster software can be patched, the less vulnerable users of that software become. Software patches are, however, embedded in complex socio-technical systems, involving software designers, bug hunters, malicious actors, operators of the systems running the code and users who often demand continuous service. All that is to say: just because a vulnerability exists does not mean it will be found; just because it is found does not mean it will be exploited; just because it is exploited does not mean it will be patched; and just because it is patched does not mean, necessarily, that the patch will be deployed in any sort of polynomial time (Ablon and Bogart 2017; Herr, Schneier, and Morris 2017; Geer 2019).
Out of this socio-technical assemblage of software development and maintenance comes a simple security metric, known as time-to-patch rates or remediation velocity (Kenna Security and Cyentia Institute 2019). Time to patch is the time it takes for an organisation to correct a certain proportion of vulnerabilities on its systems after the announced discovery of the vulnerability and the issuance of a correction by a vendor. Generally, faster patch rates are better, but organisations need to balance a multitude of incentives and costs, meaning that immediately patching all vulnerabilities is neither cost effective nor always a risk-reducing step. After all, installing patches can also break things and prevent you from doing business, which may cost more than suffering a breach.
Many factors affect organisational time-to-patch rates (Kenna Security and Cyentia Institute 2019). Organisational size, itself directly influenced by patterns of market concentration, also matters. Concentration can have three potential effects on time-to-patch rates through the concept of returns to scale. First, increased scale might improve patch rates by reducing the average time-to-patch as scale increases (a pattern of increasing returns to scale). Larger firms, for example, might have disproportionately more resources to devote to patching vulnerabilities. Second, scale might matter little at all for remediation velocity (constant returns to scale). Lastly, there could be maladaptive effects at work, where increased scale leads to a worsening time-to-patch rate (decreasing returns to scale). For example, a larger organisation might be more centralised and complex, generating bureaucratic choke points that limit the timely remediation of security vulnerabilities.
Each scenario is possible, but empirically scale tends to have a negative effect on remediation velocity (Kenna Security and Cyentia Institute 2019). Data from Kenna Security and the Cyentia Institute plotted in Figure 3 illustrate just how much worse medium and large firms fare compared to smaller organisations on the time-to-patch measure, across both exploited and not yet exploited vulnerabilities. In a total sample of 300 separate enterprises, smaller firms (1-500 employees) routinely outperform both medium (500-5,000 employees) and large (more than 5,000 employees) organisations. Consistent with the idea that increasing scale and complexity are a harm to patch rates, the largest gap is between small and large firms, with medium size organisations falling in the middle in terms of performance. For instance, compared to small firms, it takes large organisations a median of nine days longer to patch 25 per cent of unexploited vulnerabilities, 47 days longer to patch 50 per cent, and 157 days longer to patch 75 per cent (Kenna Security and Cyentia Institute 2019). All firms take less time to patch exploited vulnerabilities, which makes sense since these flaws are known to be weaponized. However, larger firms once again take the longest time to patch 25 per cent, 50 per cent or 75 per cent of known, exploited vulnerabilities in their systems.

Figure 3. Time-to-patch and the cyber risk of large organisations.

The evidence suggests that scale harms organisational patch rates, making more systems vulnerable for longer. Market concentration can worsen systemic vulnerability, generating more cyber risk. While individuals can, at times, improve their personal security by using large services for AV protection, website content delivery, or as an email provider, scale sometimes plays out badly for cyber risk in aggregate. Because larger nodes are attacked more frequently, the odds that large providers of various stripes will not be beaten eventually, to the detriment of their users, are vanishingly small over a sufficiently large number of attacks. Likewise, at the level of computer software, market concentration tends to worsen the odds that exploitable vulnerabilities exist. Additionally, larger organisations tend to be worse at patching defects in their software systems, creating longer windows of vulnerability.
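As a minimal sketch of the metric itself, not of the Kenna/Cyentia methodology, time-to-patch percentiles can be computed from per-vulnerability remediation times (the values below are fabricated):

```python
import statistics

# Days from fix availability to deployment, one entry per vulnerability.
# These values are fabricated purely to show the shape of the metric.
days_to_patch = [3, 7, 12, 20, 35, 60, 90, 150, 210, 400]

q1, median, q3 = statistics.quantiles(days_to_patch, n=4)
print(f"25% of vulnerabilities patched within ~{q1:.0f} days")
print(f"50% within ~{median:.0f} days, 75% within ~{q3:.0f} days")
```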
5. Concentration and impact
If targeted and then compromised, what happens next (i.e. the fallout or impact of a cyberattack) is hugely variable, but likewise dependent on patterns of concentration, among other factors. One way to conceptualise concentration and scale is to think of big organisations as being highly connected, in the sense that what happens to the big organisations ripples out and affects many others. A diffuse system is one with isolated pockets, without, in network theory terms, connecting edges to other nodes. In highly concentrated systems, everything is connected, especially to the big nodes (e.g. organisations or platforms) in the system. Such interconnections can take a number of forms, but readily include links between organisations via data sharing agreements, common supply chains and third party vendors. Interconnections create transmission pathways by which the negative consequences of a security incident can percolate through a whole system (Geer 2018a).
Content delivery networks (CDNs) are a good example of how a cluster of users on a
single service can amplify the potential impact of major security incidents. CDNs move
content closer to the edge nodes of the network and allow users to both get better
service quality and improved protections from distributed denial of service (DDoS)
attacks. Because CDNs can leverage resources in a coordinated way, they tend to be
more efficient than a loosely interlinked group of, say, individual website operators who are all providing their own bandwidth to handle incoming traffic. That efficiency means that there are fewer system-wide resources to handle big malicious events than would otherwise be the case, but the CDN can leverage the available resource in a more coordinated way, bringing more capacity to bear at any given point in time.[1] Effectively, CDNs, like AV vendors before, can sometimes reduce individual user vulnerability by leveraging their increased scale in a coordinated way.
But when the CDN or other major Internet node fails, the effect of concentration on the impact element of the cyber risk equation can be large indeed. Take the 2016 attack on Dyn, a domain name system provider, as an example. The attack was well into the terabit per second range (the largest attack to date is a 1.3 Tbps attack on GitHub), but still represents only around one-hundredth of the latent DDoS potential worldwide (Leverett and Kaplan 2017). The attack temporarily disrupted the services of some 69 different Internet-based companies, ranging from AirBnB to Zillow, but including such big-name brands as Netflix, Spotify, HBO, Fox News and Twitter. Cumulatively, the economic cost of the attack was huge. (The more interconnected, the more counterparty risk matters.)
The root of the problem in the Dyn case was that these companies all used Dyn's services to resolve their DNS queries (this example showcases concentration in name space, since the DNS itself is a distributed protocol and Dyn, to a lesser degree, maintains a geographically distributed infrastructure). When this concentrated Goliath fell, the ripple effects were large indeed. Consider for a moment the counterfactual, where each company maintains its own DNS services. The system would be far less efficient and less able to handle major traffic requests. Yet when one point in the system failed, the scope of the potential damage or disruption (impact, to use the cyber risk term) would be minimal. AirBnB might fall for a period if hit by a 1.3 Tbps attack, but this would not influence the provision of service by Spotify, HBO or any other online services. Decentralisation, as opposed to concentration, can limit the potential impact that any given cyber attack might have by precluding common-mode failure.
Lest we be fooled, the adverse effects of market concentration on the impact component of cyber risk are not confined to online services. Operating systems are another clear example. Microsoft is the dominant OS used in most PCs attached to personal and government networks. When a new zero day for Microsoft hits the wild, the effects can be large. The WannaCry attack in May 2017 is one such example. The malware made use of the EternalBlue exploit, which was reportedly found by the National Security Agency (NSA), leaked online by the Shadow Brokers, and then used by some unknown party to launch a globe-spanning ransomware attack.

The exploit targeted systems running unpatched versions of Windows 7, 8 and XP. Over the span of a weekend, the attack spread to well over a hundred countries and affected some 200,000 devices, with the National Health Service in the United Kingdom getting hit particularly badly. The attack was aborted early by the actions of a single computer enthusiast/hacker named Marcus Hutchins, who accidentally stopped the attack by registering the malware's kill-switch domain name, preventing its further spread (Solon 2017). WannaCry's spread in response to a single exploit once again highlights the potential effects of market concentration on the impact component of cyber risk. When everyone is on the same system, not only does that system attract the bulk of malicious attacks, making it likely to fail over a large enough volume of attacks, but the potential impact of a successful attack moves from a private tragedy to a system-wide, cascading and potentially significant event.
Indeed, the problems of scale and interconnection are not as idiosyncratic as these examples might suggest. More systematic evidence suggests that (a) security events that come to involve multiple parties are disproportionately costly compared to single organisation events; and (b) larger organisational size tends to generate more pronounced downstream ripple effects, suggesting that the more big organisations there are in a sector, the more knock-on effects ought to be expected (Riskrecon and Cyentia Institute 2019). The median cost of a breach of a single organisation is $77,200, with no downstream or upstream ripple effects. The median cost of a multiparty event is $999,500. On a frequency distribution, the extreme end cost (95th percentile) for a single organisation is $16,000,000, while the same 95th percentile event involving multiple organisations through downstream and upstream interconnections is $417,362,204 (Riskrecon and Cyentia Institute 2019). The compromise of multiple organisations should obviously cost more than the breach of a single firm, but the cost of ripple effects due to interconnection is out of proportion with the number of total victims. The ratio of downstream to originating organisations affected by a security incident in a multiparty breach is 8-to-1. The ratio of median costs for a single vs multiple party incident, however, is 13-to-1 (Riskrecon and Cyentia Institute 2019).
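The quoted ratios can be checked directly from the cited medians:

```python
# Median incident costs cited above (Riskrecon and Cyentia Institute 2019).
single_party_median = 77_200
multi_party_median = 999_500

print(round(multi_party_median / single_party_median))  # ~13, the 13-to-1 cost ratio
# Victims grow 8-to-1 while median costs grow ~13-to-1: costs outpace the victim count.
```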
Big nodes (measured here by companies with more employees) also create bigger downstream ripple effects. An ecosystem with many big organisations is, therefore, likely to suffer from far more and far costlier security incidents as the effects ripple out across the various interconnected links, spreading from a central organisation to third party vendors, downstream clients and so forth. For example, while companies of every size as measured by the number of employees can cause a multiparty security incident, larger firms are more often the source of a multiparty security event, while small-to-medium sized enterprises are usually the recipients of such events, as detailed in Figure 4 (Riskrecon and Cyentia Institute 2019).
Concentration, in other words, negatively affects the impact component of the cyber risk equation. Bigger organisations or platforms have lots of users who can all be simultaneously affected by a security incident. Bigger organisations also tend to be more interconnected with others via supply chains, third party vendors and other service providers. In either case, concentration, to the extent that it denotes greater scale and interconnection, tends to create the scene for a disproportionately pronounced fallout from a security incident.
6. Contending with cyber risk in highly concentrated markets
Concentration influences all three components of the cyber risk equation: threat, vulnerability and impact. It plausibly causes a redistribution of risk that affects the threat component, shifting attacks from infrequently used nodes towards the larger hubs of activity. It also plausibly improves individual security outcomes in the median case by transferring the work of security from individuals to large hubs with advantages of professionalism and scale. These hubs can leverage their scale to produce more security but are at risk of being undone by being targeted so much that even their improved protections are simply overwhelmed. Lastly, larger hubs increase the odds of significant failures, worsening the potential negative outcome of a cyberattack.
Figure 4. Big organisations and the multiparty ripple effects of security incidents.
The cumulative result of this analysis is fairly simple. Many forms of market concentration, from protocols and code to links, users, services, firms or platforms, both shift and aggravate cyber risk. If that is the case, then the issue of cyber risk becomes an increasingly relevant concern as more of society's functions transition to online-only. If the Internet does eventually become, as former Swedish Prime Minister Carl Bildt put it, the 'infrastructure of all other infrastructures', then trends in market concentration present some very real, society-wide cybersecurity challenges (Bildt 2015).

Norm building efforts aimed at preventing malicious use of the Internet and IT systems in the first place can help manage risk (Governance 2015; Bildt and Smith 2016; Cyberspace 2019). More precisely to the issue of concentration, remedial steps, at their core, advance along four lines:
First, cyber risk can be managed only to the extent that it can be measured. Models of managing cyber risk at the level of an individual firm suggest that expenditure should not exceed 37 per cent of the expected costs of a security incident (Gordon and Loeb 2002; Gordon and Loeb 2006; Baryshnikov 2012; Geer 2015; Gordon et al. 2015). Yet, accurately measuring cybersecurity events is hugely challenging, as measures often suffer from problems of the denominator (Jardine 2015, 2018, 2020), incompatible metrics (Brecht and Nowey 2013), insufficient attention to over-time trends (Geer and Jardine 2017; Jardine 2020), measures distorted by political or economic incentives (Anderson et al. 2008; Anderson et al. 2013; Lawson and Middleton 2019), a lack of data transparency necessitating clever measurement techniques (Woods, Moore, and Simpson 2019), reporting biases (Florêncio and Herley 2011) and data aggregation problems (Jardine 2017b; Jardine 2020). Issues of technological flux likewise present a challenge where past data might supremely fail to predict future events. The task of producing better measures is not easy, but it is fundamental. Good risk mitigation policies require a concerted effort to better measure every aspect of the Internet ecosystem in a way that at least allows for the potential for data-driven policy.
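As a worked illustration of the 37 per cent ceiling (the Gordon-Loeb bound is 1/e, roughly 36.8 per cent, of expected loss; the loss figure here is hypothetical):

```python
import math

# Gordon-Loeb style ceiling: spend no more than ~1/e (about 37%) of the
# expected loss from a security incident. The loss figure is hypothetical.
expected_loss = 1_000_000
spending_ceiling = expected_loss / math.e

print(f"Spend no more than about ${spending_ceiling:,.0f}")  # ~$367,879
```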
Second, countering growing cyber risk that arises from trends toward greater market concentration requires the deliberate development of ecosystem diversity, even though such a process often runs afoul of the sound economic logic pushing towards greater scale and centralisation (Coase 2012; Geer 2018a). Redundant systems in large clusters are not sufficient. Redundancies understood as 'more of the same' will not help protect against the cyber risks that follow from greater market concentration. Ten thousand redundant systems all running Microsoft Windows or Apache Struts can fall like dominoes when these systems are compromised. As in nature, diversity can increase resiliency and prevent cascading failures. Diversity of systems might introduce inefficiencies that hurt the bottom line, but it protects against costly large-scale failures. Promoting diversity, especially for critical services, is likely key if cyber risk is to avoid reaching, or potentially pull back from, a level where an Internet equivalent of an extinction event is possible. Concrete methods for promoting diversity of systems could range from antitrust legislation at the extreme to tooling government or corporate procurement of IT technologies and services to prevent homogeneity of providers/platforms.
Third, calculations of cyber risk insurance need to factor in ecosystem-wide trends in market concentration in order to accurately price risk, especially to the extent that cyber risk exhibits features that are distinct from other risk types (Biener, Eling, and Wirfs 2015). While organisational factors (such as corporate revenue and assets) are a common feature of cyber risk insurance policies (Romanosky et al. 2019), each firm remains nested in a sector with distinct patterns of market concentration. The way in which proportional organisational size interacts with malicious actor incentives suggests that accurate insurance pricing requires knowing not just how big in absolute terms a prospective policyholder might be, but also how large a share of the ecosystem (sector/industry) they represent. In highly concentrated spaces, premiums for the main nodes should be disproportionately high and payments for others comparatively low, everything else being equal. In more competitive markets, premiums weighted for asset value and organisational security procedures should trend towards a median value. Yet, issues such as the over-supply of insurers can inhibit effective security governance through insurance pricing mechanisms (Woods and Moore 2019; Woods, Moore, and Simpson 2019).
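Purely as an illustration of the point, not an actuarial method, a premium could be loaded by the insured's share of its sector and the sector's concentration; the function, weights and figures below are all hypothetical:

```python
# Purely illustrative premium loading by ecosystem position; not an actuarial
# model. The function, weights and all figures below are hypothetical.
def adjusted_premium(base_premium, firm_share, sector_hhi, reference_hhi=2500):
    concentration_load = 1 + firm_share * (sector_hhi / reference_hhi)
    return base_premium * concentration_load

print(round(adjusted_premium(100_000, 0.60, 4000)))  # 196000: a dominant hub in a concentrated sector
print(round(adjusted_premium(100_000, 0.02, 800)))   # 100640: a small firm in a competitive sector
```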
Fourth, for certain systems, measurement to manage risk and diversity might be insufficient, and persistent disconnectedness might be needed (Geer 2018a). Interconnection in many aspects of life may increase risk, yet do so within manageable bounds. Certain aspects of a country's critical national infrastructure, however, are likely too important to interconnect, even if such connection is done with the best security standards in mind. The implication, really, is that high impact events that affect many due to interconnection and concentration might be limited via disconnected firebreaks. Senate Bill 79, which was recently included in the National Defense Authorization Act, proposes just such a move in the United States with regards to US energy infrastructure. To mitigate risk, certain parts of a nation's infrastructure might be best left off the proverbial table.
7. Conclusion
In sum, there is an often malign association between trends in market concentration and the location and level of cyber risk within the system. Many of these patterns are not unique to cyberspace and could affect other facets of risk in finance and other sectors. Market concentration affects all three elements of the cyber risk equation, as it does to a degree in a host of other sectors, but particularly the financial sector (Gürtler, Hibbeln, and Vöhringer 2010; Taleb 2010; Mandelbrot 2013; Zhang et al. 2013; Kasman and Kasman 2015; Dhaliwal et al. 2016). Via the threat component, trends in market concentration redistribute risk, changing who gets targeted by a malicious actor in the first place. Everything else being equal, attackers tend to target the concentrated hubs because that is where the biggest reward resides. Via the vulnerability component, shifting market concentration levels can reduce individual risk to a degree but also increase systemic vulnerability, because big services (players) are targeted so heavily that they eventually fail, and because larger organisations exhibit the worst time-to-patch rates, leaving them and their users vulnerable for longer. Finally, via the impact component of the risk equation, concentrated nodes can mask unseen interdependencies that can give rise to massive, cascading failures.
Imagine a hypothetical example of perfect market concentration. All users cluster onto one platform. All attacks flow towards that platform. Eventually that platform fails, and all users are compromised. This extreme example is a difference in degree, rather than a difference in kind, from the events that unfold under current trends in market concentration. The implication is that countering trends in market concentration, a process that at one level means dealing with how people freely choose to use services and platforms (Barabási and Albert 1999; Barabási 2014; Jardine 2017a), is essential for managing cyber risk on the Internet. Conversely, attempting to deal with cyber risk without also contending with trends in market concentration is likely to fail.
Note
1. We are grateful to Richard Willey at Akamai for helping us think this issue through.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This paper is part of a wider project, Incrementally Tailoring a Better Cyber Risk Score, funded by a
Comcast Innovation Grant. Grant number: 2019-145.
Notes on contributors
Dan Geer is a computer security analyst and risk management specialist. He is recognised for raising
awareness of critical computer and network security issues before the risks were widely understood,
and for ground-breaking work on the economics of security.
Eric Jardine is an assistant professor of political science at Virginia Tech and a research fellow at the
Centre for International Governance Innovation (CIGI). His research focuses on the uses and abuses
of the Dark Web, the measurement of trends in cybercrime data, and the inherent politics surrounding
both anonymity-granting technologies and encryption. He has given a TEDx talk on emergent privacy challenges of the digital age, a congressional staffer briefing on the Dark Web and cryptocurrencies, and spoken numerous times at the United Nations Conference on Trade and Development about trends in online trust. His scholarly work has been published in a number of peer-reviewed outlets, including New Media & Society, Journal of Cyber Policy, First Monday, Intelligence and National Security, Terrorism and Political Violence, and Studies in Conflict and Terrorism, among numerous others. He is the co-author, with Fen Hampson, of Look Who's Watching: Surveillance, Treachery and Trust Online.
More information can be found at his website, www.measuringcyber.com
Éireann Leverett once found 10,000 vulnerable industrial systems on the internet. He then worked
with Computer Emergency Response Teams around the world for cyber risk reduction. He likes
teaching the basics, and learning the obscure. He continually studies computer science, cryptography, networks, information theory, economics, and magic history. He is also fascinated by zero knowledge proofs, firmware and malware reverse engineering, and complicated network effects such as Braess' and Jevons' Paradoxes. He has worked in quality assurance on software that runs
the electric grid, penetration testing, and academia. He likes long binwalks by the hexdumps with
his friends. Éireann Leverett is a regular speaker at computer security conferences such as FIRST,
BlackHat, Defcon, Brucon, Hack.lu, RSA, and CCC; and also a regular speaker at insurance and risk conferences such as Society of Information Risk Analysts, Onshore Energy Conference, International
Association of Engineering Insurers, International Risk Governance Council, and the Reinsurance
Association of America. He has been featured by the BBC, The Washington Post, The Chicago
Tribune, The Register, The Christian Science Monitor, Popular Mechanics, and Wired magazine.
ORCID
Eric Jardine http://orcid.org/0000-0002-2041-314X
Eireann Leverett http://orcid.org/0000-0001-6586-7359
References
Ablon, Lillian, and Andy Bogart. 2017. Zero Days, Thousands of Nights: The Life and Times of Zero-day Vulnerabilities and Their Exploits. Santa Monica, CA: RAND Corporation. https://www.rand.org/pubs/research_reports/RR1751.html.
Anderson, Ross, Chris Barton, Rainer Böhme, Richard Clayton, Michel van Eeten, Michael Levi, Tyler Moore, and Stefan Savage. 2013. "Measuring the Cost of Cybercrime." In The Economics of Information Security and Privacy, edited by Rainer Böhme, 265–301. New York: Springer.
Anderson, Ross, Rainer Böhme, Richard Clayton, and Tyler Moore. 2008. "Security Economics and European Policy." Workshop on the Economics of Information Security, Hanover, New Hampshire.
Arce, Daniel G. 2018. "Malware and Market Share." Journal of Cybersecurity 4 (1). doi:10.1093/cybsec/tyy010.
AV Test Institute. 2019. "Malware." https://www.av-test.org/en/statistics/malware/.
Barabási, Albert-László. 2014. Linked: How Everything is Connected to Everything Else and What it Means for Business, Science, and Everyday Life. New York: Basic Books.
Barabási, Albert-László, and Réka Albert. 1999. "Emergence of Scaling in Random Networks." Science 286 (5439): 509–512. doi:10.1126/science.286.5439.509.
Baryshnikov, Yuliy. 2012. "IT Security Investment and Gordon-Loeb's 1/e Rule." WEIS, Berlin, Germany.
Biener, Christian, Martin Eling, and Jan Hendrik Wirfs. 2015. "Insurability of Cyber Risk: An Empirical Analysis." The Geneva Papers on Risk and Insurance - Issues and Practice 40 (1): 131–158. doi:10.1057/gpp.2014.19.
Bildt, Carl. 2015. "Why Technology, Not Geography, Is Key to Cybersecurity." HuffPost.
Bildt, Carl, and Gordon Smith. 2016. "The one and Future Internet." Journal of Cyber Policy 1 (2): 142–156.
Bouveret, Antoine. 2018. Cyber Risk for the Financial Sector: A Framework for Quantitative Assessment. Washington, DC: International Monetary Fund.
Brecht, Matthias, and Thomas Nowey. 2013. "A Closer Look at Information Security Costs." In The Economics of Information Security and Privacy, edited by Rainer Böhme, 3–25. New York: Springer.
Coase, Ronald Harry. 2012. The Firm, the Market, and the Law. Chicago, IL: University of Chicago Press.
Coburn, Andrew, Eireann Leverett, and G. Woo. 2019. Solving Cyber Risk: Protecting Your Company and Society. Hoboken, New Jersey: John Wiley & Sons, Inc.
Cox, Anthony. 2008. "What's Wrong with Risk Matrices?" Risk Analysis 28 (2): 497–512. doi:10.1111/j.1539-6924.2008.01030.x.
Cyberspace, Global Commission on the Stability of. 2019. Advancing Cyberstability.
Dhaliwal, Dan, J. Scott Judd, Matthew Serfling, and Sarah Shaikh. 2016. "Customer Concentration Risk and the Cost of Equity Capital." Journal of Accounting and Economics 61 (1): 23–48.
Florêncio, Dinei, and Cormac Herley. 2011. "Sex, Lies and Cyber-crime Surveys." Workshop on the Economics of Information Security, Washington, DC.
Geer, Daniel E. 2015. "For Good Measure: The Denominator." Login 40 (5): 71–74.
Geer, Dan. 2018a. "A Rubicon." Aegis Series, Hoover Institution.
Geer, Daniel E. 2018b. "Trading Places." IEEE Security & Privacy 16 (1): 104.
Geer, Daniel E. 2019. "For Good Measure: Curves of Error." Login 44 (2): 53–55.
Geer, Dan, and Eric Jardine. 2017. "Cybersecurity Workload Trends." Login 42 (1): 63–66.
Gordon, Lawrence A., and Martin P. Loeb. 2002. "The Economics of Information Security Investment." ACM Transactions on Information and System Security (TISSEC) 5 (4): 438–457.
Gordon, Lawrence A., and Martin P. Loeb. 2006. Managing Cybersecurity Resources: A Cost-Benefit Analysis. New York: McGraw-Hill.
Gordon, Lawrence A., Martin P. Loeb, William Lucyshyn, and Lei Zhou. 2015. "Increasing Cybersecurity Investments in Private Sector Firms." Journal of Cybersecurity 1 (1): 3–17. doi:10.1093/cybsec/tyv011.
Governance, Global Commission on Internet. 2015. Toward a Social Compact for Digital Privacy and Security: Statement by the Global Commission on Internet Governance. London: Chatham House.
Gürtler, Marc, Martin Thomas Hibbeln, and Clemens Vöhringer. 2010. "Measuring Concentration Risk for Regulatory Purposes." Journal of Risk 12: 69–104.
Herr, Trey, Bruce Schneier, and Christopher Morris. 2017. Taking Stock: Estimating Vulnerability Rediscovery. Cambridge, MA: Belfer Center.
Hirschman, Albert O. 1964. "The Paternity of an Index." The American Economic Review 54 (5): 761–762.
Hirschman, Albert O. 1980. National Power and the Structure of Foreign Trade. Vol. 105. Berkeley, CA: University of California Press.
Hubbard, Douglas W., and Richard Seiersen. 2016. How To Measure Anything In Cybersecurity Risk.
Jardine, Eric. 2015. "Global Cyberspace is Safer Than you Think: Real Trends in Cybercrime." Global Commission on Internet Governance Paper Series (16): 1–22. https://www.cigionline.org/sites/default/files/no16_web_0.pdf.
Jardine, Eric. 2017a. "'Something is Rotten in the State of Denmark': Why the Internet's Advertising Business Model is Broken." First Monday. doi:10.5210/fm.v22i7.7087.
Jardine, Eric. 2017b. "Sometimes Three Rights Really Do Make a Wrong: Measuring Cybersecurity and Simpson's Paradox." 16th Annual Workshop on the Economics of Information Security, La Jolla, California.
Jardine, Eric. 2018. "Mind the Denominator: Towards a More Effective Measurement System for Cybersecurity." Journal of Cyber Policy 3 (1): 116–139. doi:10.1080/23738871.2018.1472288.
Jardine, Eric. 2020. "Taking the Growth of the Internet Seriously When Measuring Cybersecurity." In Researching Internet Governance: Methods, Frameworks, Futures, edited by Laura DeNardis, Derrick Cogburn, Nanette Levinson, and Francesca Musiani. Cambridge: The MIT Press.
Jardine, Eric. n.d. "The Case Against Commercial Antivirus Software: Risk Homeostasis and Information Problems in Cybersecurity." Unpublished Working Paper, Blacksburg, VA.
Johnson, Steven. 2001. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. New York: Scribner.
Kasman, Saadet, and Adnan Kasman. 2015. "Bank Competition, Concentration and Financial Stability in the Turkish Banking Industry." Economic Systems 39 (3): 502–517.
Kenna Security and Cyentia Institute. 2019. Prioritization to Prediction Volume 3: Winning the Remediation Race.
King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
Konkel, Frank R. 2018. "Pentagon Thwarts 36 Million Email Breach Attempts Daily." Nextgov. https://www.nextgov.com/cybersecurity/2018/01/pentagon-thwarts-36-million-email-breach-attempts-daily/145149/.
Lawson, Sean, and Michael K. Middleton. 2019. "Cyber Pearl Harbor: Analogy, fear, and the framing of cyber security threats in the United States, 1991–2016." First Monday 24 (3). doi:10.5210/fm.v24i3.9623.
Leverett, Eireann, and Aaron Kaplan. 2017. "Towards Estimating the Untapped Potential: A Global Malicious DDoS Mean Capacity Estimate." Journal of Cyber Policy 2 (2): 195–208. doi:10.1080/23738871.2017.1362020.
Mandelbrot, Benoit B. 2013. Fractals and Scaling in Finance: Discontinuity, Concentration, Risk. Selecta Volume E. New York, NY: Springer Science & Business Media.
Metcalfe, Robert. 1995. "Metcalfe's Law." Infoworld 17: 40–53.
Metcalfe, Bob. 2013. "Metcalfe's Law After 40 Years of Ethernet." Computer 46 (12): 26–31.
Newman, Lily Hay. 2017. "Equifax Officially Has No Excuse." Wired.
O'Donnell, Adam. 2008. "When Malware Attacks (Anything but Windows)." IEEE Security & Privacy 6 (3): 68–70.
Ozment, Andy, and Stuart E. Schechter. 2006. "Milk or wine: does software security improve with age?" Proceedings of the 15th Conference on USENIX Security Symposium - Volume 15, Vancouver, B.C., Canada.
Rhoades, Stephen A. 1993. "The Herfindahl-Hirschman Index." Federal Reserve Bulletin 79: 188.
Riskrecon and Cyentia Institute. 2019. Ripples Across the Risk Surface: A Study of Security Incidents Impacting Multiple Parties. Riskrecon and Cyentia Institute.
Romanosky, Sasha, Lillian Ablon, Andreas Kuehn, and Therese Jones. 2019. "Content Analysis of Cyber Insurance Policies: How do Carriers Price Cyber Risk?" Journal of Cybersecurity 5 (1). doi:10.1093/cybsec/tyz002.
Solon, Olivia. 2017. "Marcus Hutchins: Cybersecurity Experts Rally Around Arrested WannaCry 'hero'." The Guardian. https://www.theguardian.com/technology/2017/aug/11/marcus-hutchins-arrested-wannacry-kronos-cybersecurity-experts-react.
Sukwong, Orathai, Hyong S. Kim, and James C. Hoe. 2011. "Commercial Antivirus Software Effectiveness: An Empirical Study." IEEE Computer Society.
Taleb, Nassim Nicholas. 2010. The Black Swan: The Impact of the Highly Improbable. New York: Random House Trade Paperbacks.
Woods, Daniel, and Tyler Moore. 2019. "Does Insurance have a Future in Governing Cybersecurity?" IEEE Security and Privacy Magazine.
Woods, Daniel, Tyler Moore, and A. Simpson. 2019. "The County Fair Cyber Loss Distribution: Drawing Inferences from Insurance Prices." Workshop on the Economics of Information Security.
Zhang, Jianhua, Chunxia Jiang, Baozhi Qu, and Peng Wang. 2013. "Market Concentration, Risk-Taking, and Bank Performance: Evidence from Emerging Economies." International Review of Financial Analysis 30: 149–157.
Zhang, Xing-Zhou, Jing-Jie Liu, and Zhi-Wei Xu. 2015. "Tencent and Facebook Data Validate Metcalfe's Law." Journal of Computer Science and Technology 30 (2): 246–251.
Zittrain, Jonathan. 2008. The Future of the Internet and How to Stop It. New Haven: Yale University Press.
Data breaches and security incidents have become commonplace, with thousands occurring each year and some costing hundreds of millions of dollars. Consequently, the market for insuring against these losses has grown rapidly in the past decade. While there exists much theoretical literature about cyber insurance, very little practical information is publicly available about the actual content of the polices and how carriers price cyber insurance premiums. This lack of transparency is especially troubling because insurance carriers are often cited as having the best information about cyber risk, and know how to assess – and differentiate – these risks across firms. In this qualitative research, we examined cyber insurance policies filed with state insurance commissioners and performed thematic (content) analysis to determine (i) what losses are covered by cyber insurance policies, and which are excluded?; (ii) what questions do carriers pose to applicants in order to assess risk?; and (iii) how are cyber insurance premiums determined – that is, what factors about the firm and its cybersecurity practices are used to compute the premiums? By analyzing these policies, we provide the first-ever systematic qualitative analysis of the underwriting process for cyber insurance and uncover how insurance companies understand and price cyber risks.