Content uploaded by Suzanne Widup on Jan 05, 2016. Author content; content may be subject to copyright.
VERIZON ENTERPRISE SOLUTIONS
2015 DBIR Contributors
(See Appendix C for a detailed list.)
2015 DATA BREACH INVESTIGATIONS REPORT
Defense Security Service, United States of America
CONTENTS
Introduction
Victim Demographics
Breach Trends: “Looking Back Before Diving Ahead”
Before and Beyond the Breach
Indicators of Compromise: “Sharing Is Cyber-Caring”
Phishing: “Attn: Sir/Madam”
Vulnerabilities: “Do We Need Those Stinking Patches?”
Mobile: “I Got 99 Problems and Mobile Malware Isn’t Even 1% of Them”
Malware: “Volume, Velocity, and Variation”
Industry Profiles: “Raising the Stakes with Some Takes on NAICS”
Impact: “In the Beginning, There Was Record Count”
Incident Classification Patterns
  Point-of-Sale Intrusions
  Payment Card Skimmers
  Crimeware
  Web App Attacks
  Denial-of-Service Attacks
  Physical Theft/Loss
  Insider Misuse
  Miscellaneous Errors
  Cyber-Espionage
Wrap-Up
Appendix A: Year in Review
Appendix B: Methodology
Appendix C: Contributing Organizations
Appendix D: The Internet of Things
QUESTIONS? COMMENTS? BRILLIANT IDEAS?
We want to hear them. Drop us a line at dbir@verizon.com, find us on LinkedIn, or tweet @VZdbir with the hashtag #dbir.
Welcome (and welcome back), friends, to our annual showcase of security breaches. We’re so glad you could attend; come inside, come inside. The year 2014 saw the term “data breach” become part of the broader public vernacular, with The New York Times devoting more than 700 articles related to data breaches, versus fewer than 125 the previous year.2 It was the year major vulnerabilities received logos (collect them all!) and needed PR firms to manage their legions of “fans.” And it was the year when so many high-profile organizations met with the nigh inevitability of “the breach” that “cyber” was front and center at the boardroom level. The real sign of the times, however, was that our moms started asking, “Is that what you do, dear?” and seemed to finally get what we do for a living.
The 2015 Data Breach Investigations Report (DBIR) continues the tradition of change with additions that we hope will help paint the clearest picture yet of the threats, vulnerabilities, and actions that lead to security incidents, as well as how they impact the organizations suffering them. In the new “Before and Beyond the Breach” section, our security data scientists analyzed (literally) dozens of terabytes of data from partners new and old, making this one of the most collaborative, data-driven information security (InfoSec) reports in existence. If you’re accustomed to reading the DBIR mainly for the headliners and one-liners, you might need to coffee up and put your thinking cap on for this one. But it’ll be worth it; we promise. Fret not, “incident pattern” aficionados—the nefarious nine are back, but they have slimmed down a bit, as you’ll see when you get to that section.
Speaking of partners, the DBIR would not be possible without our 70 contributing organizations. We continue to have a healthy mix of service providers, IR/forensic firms, international Computer Security Incident Response Teams (CSIRTs), and government agencies, but have added multiple partners from security industry verticals to take a look at a broad spectrum of real-world data. Their willingness to share data and actionable insight has made our report a hallmark of success in information sharing. For that, each of them3 has our respect and gratitude.
If you’re curious about what, how, and why we did what you see before you, flip to Appendix B, where we discuss sample bias, methodology, and other details of the research efforts making up the report. To further encourage readers to try this at home, we’ve included a “How do I learn more?” component in each relevant section, which should help you start or grow your own data-driven security practices.4
1 These numbers are based on the total data in the 2015 DBIR complete corpus. Read more about our methodology in (of all places) Appendix B: Methodology.
2 Search terms “data AND breach” for calendar years 2013 and 2014 at nytimes.com/content/help/search/search/search.html. Fun fact: Taylor Swift only saw around 400 NYT articles for 2014.
3 Full list of partners and contributors in Appendix C.
4 One final note before we dive into the breaches: The DBIR team wished to mark the passing of Leonard Nimoy, as that event came during the creation of this report. We will all miss his humor, talent, and inspiration.
INTRODUCTION
70 CONTRIBUTING ORGANIZATIONS
79,790 SECURITY INCIDENTS
2,122 CONFIRMED DATA BREACHES
61 COUNTRIES REPRESENTED1
There’s probably a decent correlation between the populations of people who read movie credits and those who read the demographics section in a report. You might linger to be reminded of the name of that actress who was also in that movie you liked years back, or to see the bloopers at the end of a Jackie Chan film, but otherwise it’s a scramble for the door before the parking lot gets slammed. We, however, believe demographics are rather important. How else would you know if the findings are generally representative, if they’re relevant to your organization, and whether any animals were harmed during the making of this report? (There weren’t, but we definitely killed some brain cells as a team.) Such questions are important to proper interpretation and application of everything else that follows.
Last year’s DBIR covered incidents affecting organizations in 95 countries; the updated tally for the 2015 report is 61. This obviously means that 34 countries got secured over the last year; great job, everyone. In truth, we don’t know what’s going on there—we have more contributors and more incidents than ever before. In terms of volume, two-thirds of incidents occurred in the U.S., but that’s more reflective of our contributor base (which continues to expand geographically) than a measure of relative threat/vulnerability.
VICTIM DEMOGRAPHICS
Figure 1. Countries represented in combined caseload
THIS YEAR’S DBIR COVERS INCIDENTS AFFECTING ORGANIZATIONS IN 61 COUNTRIES.
Figure 2 provides the specs for both victim industries5 and size ranges. Don’t give much credence to the huge number for the Public sector; we have many government CSIRTs participating in this report, and they handle a high volume of incidents (many of which fall under regulatory reporting requirements). The four columns on the right filter out the noise of these incidents—many of which are rather mundane—by including only confirmed data breaches.

The top three industries affected are the same as previous years:
Public, Information, and Financial Services.

The industries most affected look remarkably similar to prior years, and the top three are exactly the same: Public, Information, and Financial Services. Our overall take from these results remains consistent as well: No industry is immune to security failures. Don’t let a “that won’t happen to me because I’m too X” attitude catch you napping. Other than that, we’ll refrain from further commentary on these demographics and simply encourage you to look them over to decide how relevant they are to your organization and whether they change the way you read/use this report.
                              NUMBER OF SECURITY INCIDENTS        CONFIRMED DATA LOSS
INDUSTRY                      TOTAL   SMALL   LARGE   UNKNOWN     TOTAL  SMALL  LARGE  UNKNOWN
Accommodation (72)              368     181      90        97       223    180     10      33
Administrative (56)             205      11      13       181        27      6      4      17
Agriculture (11)                  2       0       0         2         2      0      0       2
Construction (23)                 3       1       2         0         2      1      1       0
Educational (61)                165      18      17       130        65     11     10      44
Entertainment (71)               27      17       0        10        23     16      0       7
Financial Services (52)         642      44     177       421       277     33    136     108
Healthcare (62)                 234      51      38       145       141     31     25      85
Information (51)              1,496      36      34     1,426        95     13     17      65
Management (55)                   4       0       2         2         1      0      0       1
Manufacturing (31–33)           525      18      43       464       235     11     10     214
Mining (21)                      22       1      12         9        17      0     11       6
Other Services (81)             263      12       2       249        28      8      2      18
Professional (54)               347      27      11       309       146     14      6     126
Public (92)                  50,315      19  49,596       700       303      6    241      56
Real Estate (53)                 14       2       1        11        10      1      1       8
Retail (44–45)                  523      99      30       394       164     95     21      48
Trade (42)                       14       1       0        13         6      4      0       2
Transportation (48–49)           44       2       9        33        22      2      6      14
Utilities (22)                   73       1       2        70        10      0      0      10
Unknown                      24,504     144       1    24,359       325    141      1     183
TOTAL                        79,790     694  50,081    29,015     2,122    573    502   1,047
5 We use the North American Industry Classification System (NAICS) for coding the victim industry. census.gov/eos/www/naics
INCIDENTS VS. BREACHES
This report uses the following definitions:
Security incident: Any event that compromises the confidentiality, integrity, or availability of an information asset.
Data breach: An incident that resulted in confirmed disclosure (not just exposure) to an unauthorized party. We use this term interchangeably with “data compromise” in this report.

Figure 2. Security incidents by victim industry and organization size
BREACH TRENDS
Looking Back Before Diving Ahead

This is an annual report, and as such, it traditionally focuses on interesting developments over the previous year. Some aspects of the threat space change that quickly, but others undulate and evolve over a longer period of time. We don’t want to lose sight of either the forest or the trees, so before delving into updates on each incident pattern, let’s take a look at some of the longer-term trends and high-level findings from this year’s data.

THREAT ACTORS

Though the number of breaches per threat actor changes rather dramatically each year as we add new partners and more data, the overall proportion attributed to external, internal, and partner actors stays roughly the same. The stream plot in Figure 3 demonstrates this well and shows that overall trends in the threat actors haven’t shifted much over the last five years.
Figure 3. Actor categories over time by percent of actors (stream plot, 2010–2014; External, Internal, Partner)
Threat actors: Virtually no change in overall proportion attributed to external, internal, and partner actors.
One of the most interesting changes in the threat actor category came to light when we started looking deeper into compound attacks (those with multiple motives). Last year, we added a motive to the Vocabulary for Event Recording and Incident Sharing (VERIS) called “secondary” to better track these. We use it in combination with a primary motive to indicate that the victim was targeted as a way to advance a different attack against another victim. Strategic web compromises are a good example. In these campaigns, a website is hacked to serve up malware to visitors in the hope that the actor’s true target will become infected. The actors have no real interest in the owner of the website other than using the owner to further the real attack. In this year’s data set, we found that nearly 70% of the attacks where a motive is known include a secondary victim. The majority of these were not from espionage campaigns (thankfully), but from opportunistically compromised servers used to participate in denial-of-service (DoS) attacks, host malware, or be repurposed for phishing sites.

In 70% of the attacks where we know the motive, there’s a secondary victim.
THREAT ACTIONS

Instead of hitting you with a list of all the threat actions seen this year, we thought we would pare it down to the big movers. Back in 2010, malware was all about the keylogger, and we saw very few examples of phishing or RAM-scraping malware being used. Fast forward to today, and RAM scraping has grown up in a big way. This type of malware was present in some of the most high-profile retail data breaches of the year, and several new families of RAM scrapers aimed at point-of-sale (POS) systems were discovered in 2014.

Phishing has also been on the rise since 2011, although the rate of growth has slowed in the last year. Meanwhile, the venerable old keylogger malware has been in decline, having been observed in only about 5% of the breaches recorded in this year’s sample.
Figure 4. Significant threat actions over time by percent (2010–2014; Credentials, RAM Scraper, Phishing, Spyware/Keylogger)
RAM scraping has grown in a big way. This type of malware was present in some of the most high-profile retail breaches.
BREACH DISCOVERY

Figure 5 offers a new twist on one of our favorite charts from the 2014 DBIR. It contrasts how often attackers are able to compromise a victim in days or less (orange line) with how often defenders detect compromises within that same time frame (teal line). Unfortunately, the proportion of breaches discovered within days still falls well below that of time to compromise. Even worse, the two lines have been diverging over the last decade, indicating a growing “detection deficit” between attackers and defenders. We think it highlights one of the primary challenges to the security industry.

Unfortunately, the proportion of breaches discovered within days still falls well below that of time to compromise.

If you’re desperate for good news, you’ll be happy to see that 2014 boasts the smallest deficit ever recorded, and the trend lines appear a bit more parallel than divergent. We’ll see if that’s a trick or a budding trend next year.
Figure 5. The defender-detection deficit: percent of breaches where time to compromise and time to discover was “days or less,” 2004–2014
IN 60% OF CASES, ATTACKERS ARE ABLE TO COMPROMISE AN ORGANIZATION WITHIN MINUTES.
BEFORE AND BEYOND THE BREACH

It should be obvious by now that the DBIR crew doesn’t put much stock in maintaining the status quo. We don’t get very excited about just updating numbers and cranking out text. This project affords us a unique opportunity to explore amazing data provided by great companies, agencies, and organizations around the world, and we’re not keen on squandering that. We want to learn everything we can and then share our findings in the hope that it leads to better security awareness, understanding, and practice for us all.

We dedicated more effort to exploring other areas that fall outside the traditional VERIS data points.

Thus, after reviewing the data gathered for this report, we all agreed we’d be wasting a great opportunity if we merely updated findings for the nine incident patterns introduced last year. We just didn’t find many new “Aha!” discoveries to share with regard to those patterns, and so we decided to trim them down and dedicate more effort to exploring other areas of the data. That search led us to go “before and beyond” the breach to study things that relate to incidents in some way, but fall outside the traditional VERIS data points that drive the pattern-based analysis. The result is a collection of independent episodes rather than one long movie. So pop some popcorn, get comfy, and binge-watch this season’s adventures.

CUE ’80s TV-SHOW THEME MUSIC
Episode 1: Indicators of Compromise: “Sharing Is Cyber-Caring”
Episode 2: Phishing: “Attn: Sir/Madam”
Episode 3: Vulnerabilities: “Do We Need Those Stinking Patches?”
Episode 4: Mobile: “I Got 99 Problems, and Mobile Malware Isn’t Even 1% of Them”
Episode 5: Malware: “Volume, Velocity, and Variation”
Episode 6: Industry Profiles: “Raising the Stakes with Some Takes on NAICS”
Episode 7: Impact: “In the Beginning, There Was Record Count”
Episode 8: “Internet of Things” (See Appendix D)

We looked at new data that relates to breach events, but goes beyond traditional incident reporting.
INDICATORS OF COMPROMISE
Sharing Is Cyber-Caring

Threat intelligence indicators are the new brass rings of cybersecurity. But is this threat sharing helpful?

Threat intelligence indicators have become the new brass rings on the cybersecurity merry-go-round. These precious trinkets of compromise gain increasing status as more organizations and governments jump on the sharing bandwagon. We thought we would be remiss in our duties if we did not provide some analysis of “threat sharing” and/or “indicators of compromise” (IOC) to you, our valued DBIR readers. We’ll start with a bit of research performed by a new contributor to the DBIR, Niddel.

GOTTA CATCH ’EM ALL

For the past 18 months, Niddel has been collecting and analyzing open-source feeds of IP address and domain name indicators. Their goal was to evaluate a diverse array of indicators and understand how these sources of information can be leveraged to provide defenders with an asymmetrical advantage they so desperately lack. One of the most important experiments conducted was to determine the overlap between these feeds and whether or not there were any “special snowflakes” to be found.

Niddel combined six months of daily updates from 54 different sources of IP addresses and domain names tagged as malicious by their feed aggregators. The company then performed a cumulative aggregation, meaning that if two different feeds ever mentioned the same indicator at any point during the six-month experimental period, they were considered to overlap on that specific indicator. To add some context to the indicator feeds being gathered, Niddel separated them into two large groups:

• Inbound feeds that provide information on sources of scanning activity and spam/phishing e-mail.
• Outbound feeds that provide information on destinations that serve either exploit kits or malware binaries, or provide locations of command-and-control servers.

The results can be seen in Figure 6. We only see significant overlap on the inbound feeds, which can be found in the bottom-left corner of the chart. Why? Two possible answers are:

1. Most of these feeds are actually drawing their aggregated data from the same honeypot sources.
2. Most of the attack sources are so nontargeted that they cover the entire Internet address space and trigger all the different honeypots.

Given the limited use of those inbound feeds in day-to-day security operations (everyone gets probed and scanned all the time), there is an interesting pattern that appears when you look at the results from the outbound feeds. Although everyone is also subjected to the same threats, the overlap in what is reported on those feeds is surprisingly small, even with a “long exposure photograph” of six months’ time.
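The cumulative-overlap measurement Niddel performed can be sketched in a few lines. This is a toy illustration of the idea, not Niddel’s actual pipeline; the feed names and indicator values below are invented:

```python
from collections import Counter

def feed_overlap(feeds):
    """feeds: dict mapping feed name -> iterable of indicators (IPs/domains).
    Returns (overlap_fraction, unique_fraction) over all distinct indicators,
    where an indicator 'overlaps' if two or more feeds ever reported it."""
    counts = Counter()
    for indicators in feeds.values():
        for ind in set(indicators):   # de-duplicate within a single feed
            counts[ind] += 1          # number of feeds reporting this indicator
    total = len(counts)
    shared = sum(1 for c in counts.values() if c >= 2)
    return shared / total, (total - shared) / total

# Hypothetical feeds: only 198.51.100.7 is reported by two sources.
feeds = {
    "feed_a": ["203.0.113.5", "198.51.100.7"],
    "feed_b": ["198.51.100.7", "192.0.2.99"],
    "feed_c": ["192.0.2.123"],
}
overlap, unique = feed_overlap(feeds)
print(f"{overlap:.0%} overlap, {unique:.0%} unique")  # 25% overlap, 75% unique
```

Applied to Niddel’s real feeds, the same computation is what yields the “north of 97% unique” result discussed below.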
When biologists want to measure the population of fish in a lake, they use a very simple statistical trick to avoid counting every single fish in there. They will gather, say, 100 fish from the lake and tag them, then promptly release them back into their natural habitat. Later, after they have given the poor animals some time to recover from the trauma, they will gather samples of fish from different parts of the lake. The percentage of tagged fish in each of the different parts of the lake can be used to estimate what percentage of the fish in the lake are our original 100 tagged scaly heroes, thus estimating the total population of the lake.
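The biologists’ trick is the classic mark-and-recapture (Lincoln–Petersen) estimate. As a quick illustration with made-up numbers:

```python
def lincoln_petersen(tagged, sample_size, tagged_in_sample):
    """Mark-and-recapture population estimate:
    N ~= tagged * sample_size / tagged_in_sample."""
    return tagged * sample_size / tagged_in_sample

# 100 fish tagged; a later sample of 50 contains 10 tagged ones,
# so tagged fish are ~20% of the lake and N is estimated at 500.
print(lincoln_petersen(100, 50, 10))  # 500.0
```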
Sadly, when you look at our malicious fish, the percentage of indicators that are unique to only one feed over our six-month period is north of 97% for the feeds that we sampled. And that includes the much more overlapping inbound feeds. That means that our “malicious fish samplers” are collectively seeing less than 3% overlap across all of them.6
It is hard to draw a positive conclusion from these metrics; they seem to suggest that if threat intelligence indicators are really to help an enterprise defense strategy, one would need access to all of the feeds from all of the providers to get the “best” possible coverage. This would be a Herculean task for any organization, and given the results of our analysis, the result would still be incomplete intelligence. Companies need to be able to apply their threat intelligence to their environment in smarter ways, so that even if we cannot see inside the whole lake, we can forecast which parts of it are more likely to hold a lot of fish we still haven’t caught.
6 This is corroborated by a recent CMU study: Metcalf, L., Spring, J. M., Blacklist Ecosystem Analysis Update: 2014. resources.sei.cmu.edu/asset_files/WhitePaper/2015_019_001_428614.pdf
Although everyone is subjected to the same threats, the overlap in what is reported on outbound feeds is surprisingly small.

Figure 6. Comparison of overlap within indicator feeds (inbound vs. outbound)
WHAT EXACTLY ARE WE SHARING?

In response to all the buzz, many different companies, platforms, tools, schemas, and methods have arisen to facilitate the sharing of threat intelligence. One of our new contributors, ThreatConnect, is one such example and was kind enough to connect us with some intel on intel sharing. Using high-level data across 15 intel-sharing communities within ThreatConnect (some comprising distinct verticals, others a combination of regional or threat-focused participants), we aimed to gain insight into the types and level of data sharing and how these dynamics may differ across groups.
COMMUNITY                        IP ADDRESSES  E-MAIL ADDRESSES  FILES   HOSTS   URLS
Common Community                 35.9%         1.0%              23.3%   33.0%   6.8%
Event-Based Community #1         77.4%         0.1%              2.5%    19.5%   0.5%
Industry Community #1            16.5%         32.3%             6.3%    43.0%   1.9%
Industry Community #2            47.1%         4.4%              10.3%   29.4%   8.8%
Industry Community #3            8.3%          0.3%              1.2%    87.5%   2.7%
Industry Community #4            25.2%         2.4%              9.0%    58.6%   4.8%
Industry Community #5            50.9%         0.7%              1.3%    22.8%   24.4%
Industry Community #6            66.4%         0.6%              14.0%   13.8%   5.2%
Industry Community #7            59.1%         0.5%              1.4%    23.5%   15.5%
Industry Community #8            39.6%         3.0%              7.7%    36.9%   12.8%
Industry Community #9            51.5%         2.6%              12.6%   23.8%   9.5%
Regional Threat Community #1     49.2%         0.3%              4.5%    42.6%   3.4%
Regional Threat Community #2     50.0%         1.1%              4.5%    30.8%   13.6%
Subscriber Community             45.4%         1.2%              18.4%   24.4%   10.6%
Threat-Based Community #1        50.3%         1.1%              11.0%   24.3%   13.3%
Of course, the volume of indicators shared overall may depend on a number of factors, ranging from frequency of activity to the fidelity and availability of attack information to the resources available to produce such information. But aside from the idiosyncrasies of producers and consumers, the variety of shared threat information may boil down to organizational maturity and the projected longevity of specific threats.
YOU HERD IT HERE FIRST

Ideally, sharing intelligence should lead to a form of “herd alertness,” similar to the way plains animals warn each other when predators are nearby. This would seem to require that intelligence be shared at a faster rate than the spread of attack in order to successfully warn the rest of the community. “How fast is that?” you might ask, and it’s a great question.

To look into this, we brought in another contributor, RiskAnalytics, which supplies network “shunning” services as part of AIG’s CyberEdge cyber-insurance policies. The company leverages the most commonly shared threat indicators (IPs, domains, URLs) to monitor and distribute attack data across its client base,7 which provides a good foundation for the question at hand.

75% of attacks spread from Victim 0 to Victim 1 within one day (24 hours).
7 We have aggregated the results but are not disclosing the population size. You can always ask RiskAnalytics how big its client base is.
Figure 7. Frequency of indicator types by sharing community

Organizations would need access to all threat intelligence indicators in order for the information to be helpful—a Herculean task.
Based on attacks observed by RiskAnalytics during 2014, 75% of attacks spread from Victim 0 to Victim 1 within one day (24 hours). Over 40% hit the second organization in less than an hour. That puts quite a bit of pressure on us as a community to collect, vet, and distribute indicator-based intelligence very quickly in order to maximize our collective preparedness.
BEST WHEN USED BY…

Let’s say, for the sake of argument, that we share indicators quickly enough to help subsequent potential victims. The next thing we need to know is how long we can expect those indicators to remain valid (malicious, active, and worthy of alerting/blocking). We return to the RiskAnalytics data set to study that important question.

Figure 8 shows how long most IP addresses were on the block/alert list. We split the view into Niddel’s inbound and outbound categories to see if that made a difference in longevity. While some hang around for a while (we restricted the graphic to seven days, but both charts have a fairly long tail), most don’t last even a day. Unfortunately, the data doesn’t tell us why they are so short-lived, but these findings track well with Niddel’s “cumulative uniqueness” observations.
Ultimately, the data speaks to a need for urgency: The faster you share, the more you (theoretically) will stop. This is just one data source, though, and one that is geared toward threats of a more opportunistic, high-volume, and volatile nature (e.g., brute forcing, web app exploits) rather than more “low and slow” targeted attacks. To test whether these findings apply more broadly, we’d be happy to incorporate data from a wider range of willing participants next year. In the meantime, we encourage others who have such data to share it. Only when we measure our intelligence systems will we know what they’re really doing for us and how we can improve them.

But the overall takeaway would appear to be valid regardless: We need to close the gap between sharing speed and attack speed.
CHOOSE THE WELL OVER THE FIRE HOSE

Ultimately, what is presented here is good news (organizations are indeed sharing). However, we’d like to recommend that if you do produce threat intel, you prioritize quality over quantity. Where an opportunity for detection presents itself, seize it in the way that offers the greatest longevity for your efforts. Certainly, anything that leads to the discovery of an incident is worthwhile, but in most cases, context is key. Those consuming threat intelligence, let it be known: An atomic indicator has a life of its own that may not be shared with another. Focus less on being led to water and work on characterizing where the well resides. Expect more of your communities; where possible, reciprocating with context lets a wider audience make additional determinations and builds a broader defensive capability.
Figure 8. Count of indicators by days observed in at least one feed (days 1–7 on the block/alert list, inbound vs. outbound; day one dominates, at roughly 116.0k inbound and 403.6k outbound indicators)

We need to close the gap between sharing speed and attack speed.
23% OF RECIPIENTS NOW OPEN PHISHING MESSAGES AND 11% CLICK ON ATTACHMENTS.
Social engineering has a long and rich tradition outside of computer/network security, and the act of tricking an end user via e-mail has been around since AOL installation CDs were in vogue. Do you remember the "free cup holder" prank? Someone sending you an attachment that opened your CD-ROM drive was cute at the time, but a premonition of more malicious acts to come.

The first "phishing" campaigns typically involved an e-mail that appeared to come from a bank, convincing users they needed to change their passwords or provide some piece of information, like, NOW. A fake web page and users' willingness to fix the nonexistent problem led to account takeovers and fraudulent transactions.

Phishing campaigns have evolved in recent years to incorporate installation of malware as the second stage of the attack. Lessons not learned from the silly pranks of yesteryear, plus the all-but-mandatory requirement to have e-mail services open for all users, have made phishing a favorite tactic of state-sponsored threat actors and criminal organizations alike, all with the intent to gain an initial foothold into a network.

In the 2013 DBIR, phishing was associated with over 95% of incidents attributed to state-sponsored actors, and for two years running, more than two-thirds of incidents that comprise the Cyber-Espionage pattern have featured phishing. The user interaction is not about eliciting information; it is how attackers establish persistence on user devices, set up camp, and continue their stealthy march inside the network.
For two years, more than two-thirds of incidents that comprise the
Cyber-Espionage pattern have featured phishing.
Financial motivation is also still alive and well in phishing attacks. The "old" method of duping people into providing their personal identification numbers or bank information is still around, but the targets are largely individuals rather than organizations. Phishing with the intent of device compromise is certainly present, and there were hundreds of incidents in the Crimeware section that included phishing in the event chain. Regardless of motive, the next section will show that good things will come to those who bait.8

8 If you think you have any better phishing puns, let minnow.
PHISHING
Attn: Sir/Madam
2015 DATA BREACH INVESTIGATIONS REPORT
ONE PHISH, TWO PHISH
In previous years, we saw phishing messages come and go and reported that the overall effectiveness of phishing campaigns was between 10% and 20%. This year, we noted that some of these stats went higher, with 23% of recipients now opening phishing messages and 11% clicking on attachments. Some stats were lower, though, with a slight decline in users actually going to phishing sites and giving up passwords.

Now, these messages are rarely sent in isolation: some arrive faster than others, and many are sent as part of a slow and steady campaign.9 The numbers again show that a campaign of just 10 e-mails yields a greater than 90% chance that at least one person will become the criminal's prey, and it's bag it, tag it, sell it to the butcher (or phishmonger) in the store.
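The arithmetic behind that greater-than-90% claim can be sketched with the complement rule, assuming independent recipients and plugging in the 23% open rate reported above:

```python
def campaign_success_prob(rate: float, n_messages: int) -> float:
    """Chance that at least one recipient takes the bait, assuming each of
    n_messages independent recipients bites with probability `rate`."""
    return 1 - (1 - rate) ** n_messages

# With the 23% open rate, a 10-message campaign already clears 90%.
print(f"{campaign_success_prob(0.23, 10):.1%}")  # about 92.7%
```

Run the same numbers with the 11% attachment-click rate and a 10-message campaign lands closer to 69%, which is why attackers send campaigns rather than single messages.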
How long does an attacker have to wait to get that foot in the door? We aggregated the results of over 150,000 e-mails sent as part of sanctioned tests by two of our security awareness partners and measured how much time had passed from when the message was sent to when the recipient opened it, and if they were influenced to click or provide data (where the real damage is done). The data showed that nearly 50% of users open e-mails and click on phishing links within the first hour.

The reality is that you don't have time on your side when it comes to detecting and reacting to phishing events.

How long do you suppose you have until the first message in the campaign is clicked? Not long at all, with the median time to first click coming in at one minute, 22 seconds across all campaigns. With users taking the bait this quickly, the hard reality is that you don't have time on your side when it comes to detecting and reacting to phishing events.
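The median computation itself is straightforward once each campaign's send and first-click timestamps are paired. The per-campaign delays below are hypothetical, chosen only to mirror the reported 1m 22s median:

```python
import statistics

# Hypothetical per-campaign delays (seconds) from message send to first click.
first_click_delays = [45, 61, 70, 82, 95, 130, 300]

median = statistics.median(first_click_delays)
print(f"median time to first click: {int(median // 60)}m {int(median % 60)}s")
```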
THERE ARE PLENTY OF PHISH IN THE SEA
We looked at organization demographics to see if one department or user group was more likely than another to fall victim to phishing attacks. Departments such as Communications, Legal, and Customer Service were far more likely to actually open an e-mail than all other departments. Then again, opening e-mail is a central, often mandatory, component of their jobs.

When we studied how many people actually clicked a link after they opened the e-mail, we found a great deal of overlap in the confidence intervals for each department…which is a fancy way of saying that we can't say there's a statistical difference between these departments.
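The overlap test alluded to here can be sketched with a normal-approximation confidence interval for each department's click rate. The department names and tallies below are hypothetical, invented only to illustrate the comparison:

```python
import math

def click_rate_ci(clicks: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% normal-approximation confidence interval for a click rate."""
    p = clicks / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical tallies: if the intervals overlap, we can't claim a
# statistically significant difference between the departments.
for dept, (clicks, n) in {"Legal": (22, 160), "Sales": (19, 170)}.items():
    lo, hi = click_rate_ci(clicks, n)
    print(f"{dept}: {lo:.1%} to {hi:.1%}")
```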
9 Unless we're talking about a very targeted spear-phishing campaign.
10 apwg.org/resources/apwg-reports
Nearly 50% open e-mails and click on phishing links within the first hour.
Figure 9. APWG sites and domains per month since 2012
DOING MORE WITH LESS

The payload for these phishing messages has to come from somewhere. Data from the Anti-Phishing Working Group (APWG)10 suggests that the infrastructure being used is quite extensive (over 9,000 domains and nearly 50,000 phishing URLs tracked each month across the Group's members). The charts in Figure 9 also show that the attackers have finally learned a thing or two from the bounty of their enterprise breaches and may even have adopted a Lean Six Sigma approach to optimize operations.
So what do we do about this? Hire only robots? Bring back command-line mail? There is obviously no one-shot antidote for the problem at hand. The general areas of focus are threefold:

• Better e-mail filtering before messages arrive in user in-boxes
• Developing and executing an engaging and thorough security awareness program
• Improved detection and response capabilities

Taking measures to block, filter, and alert on phishing e-mails at the gateway is preferred, but no technological defense is perfect, which leads us straight to…people.
There is some hope in this data, in that three-quarters of e-mails are not opened or interacted with. We wondered if there was a way to bump that number up (e.g., by giving users a quick way to flag potential phishes and become a detective control), so we asked Ellen Powers, The MITRE Corporation's Information Security Awareness Program Manager, about the effectiveness of making users part of the active defense against phishing. She noted that "MITRE employees, our human sensor network, detect 10% of advanced cyber attacks that reach employee e-mail in-boxes."

Lance Spitzner, Training Director for the SANS Securing The Human program, echoes Ellen's sentiments, noting that "one of the most effective ways you can minimize the phishing threat is through effective awareness and training. Not only can you reduce the number of people that fall victim to (potentially) less than 5%, you create a network of human sensors that are more effective at detecting phishing attacks than almost any technology."
"One of the most effective ways you can minimize the phishing threat is through awareness and training."
—Lance Spitzner, Training Director, SANS Securing The Human
Of all the risk factors in the InfoSec domain, vulnerabilities are probably the most discussed, tracked, and assessed over the last 20 years. But how well do we really understand them? Their link to security incidents is clear enough after the fact, but what can we do before the breach to improve vulnerability management programs? These are the questions on our minds as we enter this section, and Risk I/O was kind enough to join us in the search for answers.

Risk I/O started aggregating vulnerability exploit data from its threat feed partners in late 2013. The data set spans 200 million+ successful exploitations across 500+ Common Vulnerabilities and Exposures (CVEs)11 from over 20,000 enterprises in more than 150 countries. Risk I/O does this by correlating SIEM logs, analyzing them for exploit signatures, and pairing those with vulnerability scans of the same environments to create an aggregated picture of exploited vulnerabilities over time. We focused on mining the patterns in the successful exploits to see if we could figure out ways to prioritize remediation and patching efforts for known vulnerabilities.
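The pairing described above, exploit signatures pulled from SIEM logs matched against vulnerability scans of the same hosts, can be sketched roughly as follows. The hosts, CVE IDs, and field names here are invented for illustration, not drawn from Risk I/O's actual schema:

```python
# Hypothetical SIEM-derived exploit detections: which CVE was triggered
# against which host.
exploit_events = [
    {"host": "10.0.0.5", "cve": "CVE-2002-0013"},
    {"host": "10.0.0.5", "cve": "CVE-2014-3566"},
    {"host": "10.0.0.9", "cve": "CVE-2012-0152"},
]

# Hypothetical scan results: host -> set of CVEs it is actually open to.
scan_findings = {
    "10.0.0.5": {"CVE-2002-0013", "CVE-1999-0517"},
    "10.0.0.9": {"CVE-2012-0152"},
}

# Keep only events where the targeted host was confirmed vulnerable;
# these are the "successful exploitations" counted in the data set.
successful = [e for e in exploit_events
              if e["cve"] in scan_findings.get(e["host"], set())]
print([(e["host"], e["cve"]) for e in successful])
```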
‘SPLOITIN TO THE OLDIES
In the inaugural DBIR (vintage 2008), we made the following observation: For the overwhelming majority of attacks exploiting known vulnerabilities, the patch had been available for months prior to the breach [and 71% >1 year]. This strongly suggests that a patch deployment strategy focusing on coverage and consistency is far more effective at preventing data breaches than "fire drills" attempting to patch particular systems as soon as patches are released.

We decided to see if the recent and broader exploit data set still backed up that statement. We found that 99.9% of the exploited vulnerabilities had been compromised more than a year after the associated CVE was published. Our next step was to focus on the CVEs and look at the age of CVEs exploited in 2014. Figure 10 arranges these CVEs according to their publication date and gives a count of CVEs for each year. Apparently, hackers really do still party like it's 1999. The tally of really old CVEs suggests that any vulnerability management program should include broad coverage of the "oldies but goodies." Just because a CVE gets old doesn't mean it goes out of style with the exploit crowd. And that means that hanging on to that vintage patch collection makes a lot of sense.
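Computing that 99.9% figure is, at its core, one date subtraction per exploited CVE. A minimal sketch with invented records:

```python
from datetime import date

# Hypothetical (CVE publish date, exploit observation date) pairs.
records = [
    (date(1999, 8, 1), date(2014, 6, 3)),     # a 1999 oldie-but-goodie
    (date(2001, 7, 20), date(2014, 2, 11)),
    (date(2014, 10, 14), date(2014, 10, 20)), # exploited within a week
]

older_than_a_year = sum((hit - pub).days > 365 for pub, hit in records)
print(f"{older_than_a_year / len(records):.1%} exploited >1 year after publish")
```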
11 Common Vulnerabilities and Exposures (CVE) is "a dictionary of publicly known information security vulnerabilities and exposures."—cve.mitre.org
VULNERABILITIES
Do We Need Those Stinking Patches?
99.9% of the exploited vulnerabilities were compromised more than a year after the CVE was published.
Figure 10. Count of exploited CVEs in 2014 by CVE publish date (x-axis: year CVE was published, '99 through '14; y-axis: number of published CVEs exploited)
NOT ALL CVES ARE CREATED EQUAL
If we look at the frequency of exploitation in Figure 11, we see a much different picture than what's shown by the raw vulnerability count of Figure 10. Ten CVEs account for almost 97% of the exploits observed in 2014. While that's a pretty amazing statistic, don't be lulled into thinking you've found an easy way out of the vulnerability remediation rodeo. Prioritization will definitely help from a risk-cutting perspective, but beyond the top 10 are 7 million other exploited vulnerabilities that may need to be ridden down. And therein, of course, lies the challenge: once the "mega-vulns" are roped in (assuming you could identify them ahead of time), how do you approach addressing the rest of the horde in an orderly, comprehensive, and continuous manner over time?
FROM PUB TO PWN
If Figure 11 (along with our statement above from 2008) advocates the tortoise method of vulnerability management (slow and steady wins the race), then Figure 12 prefers the hare's approach. And in this version of the parable, it might just be the hare that's teaching us the lesson. Half of the CVEs exploited in 2014 fell within two weeks. What's more, the actual timelines in this particular data set are likely underestimated due to the inherent lag between initial attack and detection readiness (generation, deployment, and correlation of exploits/signatures). These results undeniably create a sense of urgency to address publicly announced critical vulnerabilities in a timely (and comprehensive) manner. They do, however, raise the question: What constitutes a "critical vulnerability," and how do we make that determination?
WHAT’S IN A SCORE, THAT WHICH WE ALL COMPOSE?
The industry standard for rating the criticality of vulnerabilities is CVSS,12 which incorporates factors related to exploitability and impact into an overall base score. Figure 13 displays the CVSS scores for three different groupings of CVEs: all CVEs analyzed (top), all CVEs exploited in 2014 (middle), and CVEs exploited within one month of publication (bottom). The idea is to determine which CVSS factors (if any) pop out and thus might serve as a type of early warning system for vulnerabilities that need quick remediation due to high likelihood of exploitation.
12 The Common Vulnerability Scoring System (CVSS) is designed to provide an open and standardized method for rating IT vulnerabilities.
Figure 11. Cumulative percentage of exploited vulnerabilities by top 10 CVEs. The top 10, in order: CVE-1999-0517, CVE-2001-0540, CVE-2002-0012, CVE-2002-0013, CVE-2014-3566, CVE-2012-0152, CVE-2001-0680, CVE-2002-1054, CVE-2002-1931, CVE-2002-1932
About half of the CVEs exploited in 2014 went from publish to pwn in less than a month.
Figure 12. Cumulative percentage of exploited vulnerabilities by week(s) from CVE publish dates (weeks 0 through 48 after publication)
None of the exploitability factors appear much different across the groups; it seems that just about all CVEs have a network access vector and require no authentication, so those won't be good predictors. The impact factors get interesting: the proportion of CVEs with a "complete" rating for C-I-A13 rises rather dramatically as we move from all CVEs to quickly exploited CVEs. The base score is really just a composite of the other two factors, but it's still worth noting that most of those exploited within a month post a score of nine or ten. We performed some statistical significance tests and found some extremely low p-values, signifying that those differences are meaningful rather than random variation. Even so, we agree with Risk I/O's finding that a CVE being added to Metasploit is probably the single most reliable predictor of exploitation in the wild.14
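Since the base score really is a composite of the exploitability and impact factors, it can be reproduced directly from the CVSS v2 base equations. The sketch below hard-codes the v2 metric weights from the specification:

```python
# CVSS v2 metric weights, per the specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # access vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # access complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # confidentiality/integrity/availability

def cvss2_base(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    """CVSS v2 base score from the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Network vector, low complexity, no auth, complete C-I-A: the 10.0
# profile that dominates the quickly exploited group.
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
```

Swap the C-I-A metrics to "partial" and the same vector drops to 7.5, which matches the pattern in Figure 13: exploitability metrics barely vary, so the impact metrics drive the separation between groups.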
Outside the CVSS score, there is one other attribute of a "critical" vulnerability to bring up, and this is a purely subjective observation. If a vulnerability gets a cool name in the media, it probably falls under this "critical vulnerability" label.15 As an example, in 2014, Heartbleed, POODLE, Schannel, and Sandworm were all observed being exploited within a month of CVE publication date.

In closing, we want to restate that the lesson here isn't "Which of these should I patch?" Figure 13 demonstrates the need for all those stinking patches on all your stinking systems. The real decision is whether a given vulnerability should be patched more quickly than your normal cycle or if it can just be pushed with the rest. We hope this section provides some support for that decision, as well as some encouragement for more data sharing and more analysis.
13 As all good CISSPs know, that's Confidentiality, Integrity, and Availability.
14 risk.io/resources/fix-what-matters-presentation
15 As this section was penned, the "FREAK" vulnerability in SSL/TLS was disclosed. freakattack.com
Figure 13. CVSS attributes across classes of CVEs: exploitability (access vector, access complexity, authentication), impact (confidentiality, integrity, availability), and CVSS base score, shown for all CVEs (n=67,567), just exploited (n=792), and critical CVEs (exploited within one month of publication; n=24)
A CVE being added to Metasploit is probably the single most reliable predictor of exploitation in the wild.
The dearth of stats and trends around mobile devices in the DBIR has been a rather obvious void for years. It's kinda high on the list of expectations when a company named Verizon publishes a threat report, which leads to many "But what about mobility?" questions during any post-presentation Q&A. But the DBIR has its roots in forensic breach investigations, and mobile breaches have been few and far between over the years. Adding dozens of new contributors didn't change that, and we've come to the same data-driven conclusion year after year: Mobile devices are not a preferred vector in data breaches. This year, however, we set our minds to analyzing the mobile space, come cell or high water.
Before we get too far, let's just get this out of the way now: Android™ wins.16 Not just wins, but Android wins so hard that most of the suspicious activity logged from iOS devices was just failed Android exploits. So while we'd love to compare and contrast iOS to Android, the data forcibly limits the discussion to the latter. Also, the malicious activity recorded on Android is centered on malware, and most of that malware is adnoyance-ware and similar resource-wasting infections.

We chopped, sliced, and flipped the data more times than a hibachi chef, since we didn't want to simply share a count of overall malware infections and enumerate vulnerabilities. There is already good research in this area, and we didn't think we could add much more. However, we did have one big question when it comes to the security of mobile devices: How big of a problem is it? It's difficult to attend a conference or see some top-whatever list without "mobile" showing up, yet it's not a theme in our primary corpus, or in any of our partners' exploit data.
16 In that it's the most vulnerable platform; kinda like winning a free tax audit.
MOBILE
I Got 99 Problems and Mobile Malware Isn’t Even 1% of Them
Figure 14. Count of all detected mobile malware infections (number of unique devices per week, Jul–Jan 2014)
Our data-driven conclusion: Mobile devices are not a preferred vector in data breaches.
To finally try to get an answer, we took our big question to our brethren over at Verizon Wireless in hopes of getting data to supply an answer. They came through with a lot of data.

With our first pass through the data, we found hundreds of thousands of (Android) malware infections, most fitting squarely in the adnoyance-ware category. In our second through eighteenth passes, we turned the data inside out but ended up just coming back to the malware. Finally, we stripped away the "low-grade" malware and found that the count of compromised devices was truly negligible. The benefit of working with an internal team is that we knew how many devices were being monitored. An average of 0.03% of smartphones per week, out of tens of millions of mobile devices on the Verizon network, were infected with "higher-grade" malicious code. This is an even tinier fraction than the overall 0.68% infection rate reported
17 For more information, please visit: fireeye.com/WEB-2015RPTMobileThreatAssessment.html
18 FireEye has counted 1,400 EnPublic apps in the wild to date, but that number is growing every week.
A BIRD'S "FIREEYE" VIEW OF MOBILE MALICIOUSNESS

We asked one of our contributors, FireEye, to give us its view of the vulnerabilities it catches in various mobile platforms and applications. FireEye noted that two main platforms dominate the mobile market today: Google's Android and Apple's iOS. FireEye researchers analyzed more than 7 million mobile apps on both platforms from January to October 2014.17

ANDROID
• 96% of mobile malware was targeted at the Android platform (which tracks well with our active malware findings in this report).
• More than 5 billion downloaded Android apps are vulnerable to remote attacks. One significant vulnerability is known as JavaScript-Binding-Over-HTTP (JBOH), which enables an attacker to execute code remotely on Android devices that have affected apps.

IOS
EnPublic apps bypass Apple's strict review process by hijacking a process normally used to install custom enterprise apps and used for beta testing. We also found that 80% of EnPublic apps18 invoke risky private APIs that are also in violation of Apple's Developer guidelines. In the wrong hands, these APIs threaten user privacy and introduce many vulnerabilities.

ADWARE
Adware is software that delivers ads to make money. While adware is not in itself harmful, it often aggressively collects personal information from the mobile device it's installed on, including name, birth date, location, serial number, contacts, and browser bookmarks. Often, this data is collected without users' consent. In our review, we examined ad libraries in Android apps. Adware is an increasingly popular option for app publishers, growing from almost 300,000 apps in 2013 to more than 410,000 in the first three quarters of 2014 alone.
Figure 15. Count of non-adnoyance mobile malware infections (number of unique devices per week, Jul–Jan 2014)

Out of tens of millions of mobile devices, the number infected with truly malicious exploits was negligible (0.03%).
in Alcatel-Lucent's Motive Security Labs' biannual report.19 We should note that their data, which is derived from the detection of malware command-and-control traffic in the mobile network, includes Windows systems tethered to wireless devices and does not include apps from Google Play™ that include adware. Even with those differences, Android makes up half of that percentage and is still much larger than the 0.03% noted in our findings.
MOBILE ENIM CONFIDUNT IN (ALIQUANTO)20
Mobile devices are not a theme in our breach data, nor are they a theme in our partners' breach and security data. We feel safe saying that while a major carrier is looking for and monitoring the security of mobile devices on its network, data breaches involving mobile devices should not be in any top-whatever list. This report is filled with thousands of stories of data loss, as it has been for years, and rarely do those stories include a smartphone.

We are not saying that we can ignore mobile devices; far from it. Mobile devices have clearly demonstrated their ability to be vulnerable. What we are saying is that we know the threat actors are already using a variety of other methods to break into our systems, and we should prioritize our resources to focus on the methods that they're using now.

When it comes to mobile devices on your network, the best advice we have is to strive first for visibility and second for control. Visibility enables awareness, which will come in handy when the current landscape starts to shift. Control should put you into a position to react quickly.
19 alcatel-lucent.com/solutions/malware-reports
20 "In Mobile We Trust (Somewhat)"
THAT NEW MALWARE SMELL

A quick look at the types of malware being used shows they are overwhelmingly opportunistic and relatively short-lived. Even though we looked at data over just a six-month period, 95% of the malware types showed up for less than a month, while four out of five didn't last beyond a week. This could be from the malware piggybacking on the short-lived popularity of legit games and apps, or perhaps it's a direct reflection of the great job we're doing in the security industry shutting down malicious behavior…or perhaps just the first one.
95% of malware types showed up for less than a month, and four out of five didn't last beyond a week.

Figure 16. Short-lived malware: Percentage of malware by days observed over a six-month period
Malware. Malware is what bwings us together today. This year, data from FireEye, Palo Alto Networks, Lastline, and Fortinet gave us a unique opportunity to peer into the malevolent machinations of criminals across nearly 10,000 organizations, large and small, in every industry vertical over the course of calendar year 2014.21 In previous years, we were only able to show how malware contributed to confirmed security incidents. This year, we drank straight from the fire hose of breaches that might have been. Staring into this malicious abyss renewed our admiration and respect for those responsible for defending their organizations, and we hope our overview of the volume, velocity, and variation of malware will first inform and then inspire you to take your security operations crew out for a round of drinks.
FAST AND FURIOUS? THINK AGAIN
Before we get down into the weeds, we'll give you a number to discuss around the water cooler: Looking at just the total number of malware events (around 170 million) across all organizations, we can perform some egregiously simple math to determine that five malware events occur every second.22
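That egregiously simple math, spelled out:

```python
total_events = 170_000_000            # malware events across all orgs, 2014
seconds_in_year = 365 * 24 * 60 * 60  # 31,536,000

print(round(total_events / seconds_in_year, 1))  # roughly five per second
```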
As we said, that's simple math, and arriving at the actual malware threat-event frequency for any given organization is nowhere near as cut-and-dried. To get a more precise handle on this, we looked at the likelihood of an organization having a malware event on any given day. It may be difficult to believe, but not every organization experiences one of those every day.23 Our analyses of the data showed that half the organizations experienced 35 or fewer days of caught malware events during an entire calendar year. Keep in mind, by the time it hits appliances, controls like firewalls, intrusion detection systems (IDS)/intrusion prevention systems (IPS), spam filters, etc., will have already reduced the raw stream of malware. Speaking of these controls, when malware events are seen and caught by them, it's more likely to be dozens (or fewer) than hundreds or thousands.
Half of organizations discovered malware events during 35 or fewer days in 2014.
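That "half the organizations" statement is just the median of per-organization counts of malware-event days. A minimal sketch, with hypothetical counts chosen for illustration:

```python
import statistics

# Hypothetical per-organization counts of days in the year with at least
# one caught malware event.
days_with_events = [2, 6, 11, 20, 35, 35, 80, 140, 220, 330]

print(statistics.median(days_with_events))  # half the orgs sit at or below this
```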
Virtually every distribution we generated during our malware analysis was long-tailed. One thing that means is that while the frequencies we've stated are true, they are still not the whole story. For example, Figure 17 shows the weekly average number of malware events for five industries: Financial Services, Insurance, Retail, Utilities, and Education.

There are noticeable spikes and lulls across each of these industries. The low average numbers for Financial Services could mean that that industry is better at filtering out phishing e-mails before they arrive at the malware protection appliances, or is attacked with malware that's harder
21 One caveat we need to clear up at the start is that this is all analysis on caught malware, whether said snaring is performed through signatures, heuristics, or sandbox evaluation. The "Outside Looking In" sidebar in this section gives some insight into what gets through.
22 Nowhere near as impressive a number as the fact that every second, 75 McDonald's burgers are consumed (globally) and 5,000 tweets are posted. Kinda makes you want to grab a salad and ditch social media.
23 Remember, we're dealing with malware caught by appliances usually placed at the perimeter. We did not have insight into the efficacy of the placement of these devices.
MALWARE
Volume, Velocity, and Variation
Figure 17. Count of malware events across industry verticals (weekly malware event counts; average weekly malware events: Financial Services 350, Insurance 575)
to detect. In contrast, the prolific amount of malware hitting education institutions could be the byproduct of less-strict policies and controls, or a sign that Education users are easy pickings for high-volume opportunistic threats.

One other thing it means is that just because you haven't seen similar spikes doesn't mean you won't. Make sure incident response plans include measures to handle a malware flood as well as a trickle.

The takeaway here is that while we've provided a baseline view of malware threat-event frequency, you should be capturing this data in your own environment, using it to understand how this overview compares to your own organization, and analyzing how your organization's own view changes over time.
YOU’RE ABSOLUTELY UNIQUE—JUST LIKE EVERYONE ELSE
With volume and velocity out of the way, it's time to turn our attention to the amount of variation (or uniqueness) across malware picked up by our contributors. Consistent with some other recent vendor reports, we found that 70 to 90% (depending on the source and organization) of malware samples are unique to a single organization.

We use "unique" here from a signature/hash perspective; when compared byte-to-byte with all other known malware, there's no exact match. That's not to say that what the malware does is also distinct. Criminals haven't been blind to the signature- and hash-matching techniques used by anti-virus (AV) products to detect malware. In response, they use many techniques that introduce simple modifications into the code so that the hash is unique, yet it exhibits the same desired behavior. The result is often millions of "different" samples of the "same" malicious program.
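The "unique by hash, identical in behavior" point is easy to demonstrate: a one-byte repacking tweak yields an entirely different hash. The payload bytes below are, of course, a harmless stand-in:

```python
import hashlib

original = b"stand-in-malicious-payload"
repacked = original + b"\x00"  # trivial padding tweak, same behavior

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(repacked).hexdigest()
print(h1[:16], h2[:16], h1 == h2)  # the hashes differ completely
```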
This is more than just the malware analyst form of omphaloskepsis (look it up). It has real-world consequences, which basically boil down to "AV is dead." Except it's not really. Various forms of AV, from gateway to host, are still alive and quarantining nasty stuff every day. "Signatures alone are dead" is a much more appropriate mantra that reinforces the need for smarter and adaptive approaches to combating today's highly varied malware.

There's another lesson here worth stating: Receiving a never-before-seen piece of malware doesn't mean it was an "advanced" or "targeted" attack. It's kinda cool to think they handcrafted a highly custom program just for you, but it's just not true. Get over it and get ready for it. Special snowflakes fall on every backyard.
24 The 2005 analyses mostly came from data in the WildList, an effort started by Joe Wells and Sarah Gordon to maintain a list of malicious binaries that are active "in the field" for use by researchers and defenders. If that wave of nostalgia hit you as hard as it did us, you may be surprised and pleased to learn that the project is still active: wildlist.org/CurrentList.txt.
25 Where the actual family name could be discerned. Attribution is further made difficult due to the nonstandard signature naming conventions between vendors and the fact that some vendors, like FireEye, are able to catch malicious code behaviorally but are not always able to classify it precisely. Perhaps y'all could at least standardize on a SEParator and field-order pattern before next year's report?
TAKE A WALK ON THE WILDLIST24

We managed to borrow a Wayback machine to take a trip to 4 BD (before DBIR) to pluck some research wisdom from one of our elder researchers. Specifically, we wanted to compare one of his findings from yesteryear against the current malware climate to see how much (or little) has changed.

The observation was that back in 2005, "just seven families represented about 70% of all malcode activity." (For those interested, those were Mytob, Netsky, Zafi, Sober, Lovgate, Mydoom, and Bagle.) Fast-forward to 2014, and our analysis of the data from our network malware defense partners suggests that should be updated to read, "20 families represented about 70% of all malware activity."25 (The most prevalent of today's families are zbot, rerdom, zeroaccess, andromeda, expiro, asprox, gamaru, and sality.)

The key differences between the malcode of 2005 and the malware of 2014 are that the older viruses were noisy e-mail worms with varying backdoor capabilities, whereas the common components of the most prevalent 2014 families involve stealthy command-and-control botnet membership, credential theft, and some form of fraud (clickfraud or bitcoin mining).

Alas, those were simpler times back in 2005.
70–90% OF MALWARE SAMPLES ARE UNIQUE TO AN ORGANIZATION.
[Figure: number of malware events per week (January 2014–January 2015) for three industries: Retail (average 801 malware events), Utilities (average 772), and Education (average 2,332).]
OUTSIDE LOOKING IN
This “Before and Beyond the Breach” section paints a picture of the volume, velocity, and variation of malware by looking at the problem from within organizations. Thanks to a new DBIR participant—BitSight—we can also take a look at the view from the outside.
BitSight uses publicly accessible indicators of compromise to create a rating that measures the “security hygiene” of an organization.26 Specifically, we combed through BitSight’s botnet index (which is one component of the overall BitSight rating) to get a feel for how frequently organizations are seen communicating with malicious nodes.
An organization’s BitSight rating (and the components that make up that rating) will take a hit each time BitSight’s monitoring infrastructure sees a beacon attempt from the IP space allocated to the company. We took the average number of botnet triggers in 2014 (for each company), then built a distribution across all organizations within an industry and compared those distributions across all industries. Figure 18 shows a stark contrast between five industries we’ve highlighted, which should be familiar from elsewhere in this section: Financial Services, Insurance, Retail, Utilities, and Education.
(NOTE: BitSight refers to the time of first trigger to the time the beaconing stops as “Time to Fix” vs. “Beacon Days.”)
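The per-company averaging and per-industry roll-up described above can be sketched in a few lines of Python. Everything here is invented for illustration; the company names, industries, and trigger counts are hypothetical, and BitSight’s actual index and data are proprietary.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical daily botnet-beacon observations: (company, industry, triggers).
observations = [
    ("acme-bank", "Financial Services", 1),
    ("acme-bank", "Financial Services", 0),
    ("shopco",    "Retail",            4),
    ("shopco",    "Retail",            6),
    ("state-u",   "Education",         20),
    ("state-u",   "Education",         30),
]

# Step 1: average number of botnet triggers per company over the year.
per_company = defaultdict(list)
for company, industry, triggers in observations:
    per_company[(company, industry)].append(triggers)
company_avg = {key: mean(vals) for key, vals in per_company.items()}

# Step 2: collect those averages into a distribution for each industry;
# comparing these distributions across industries is what Figure 18 plots.
industry_dist = defaultdict(list)
for (company, industry), avg in company_avg.items():
    industry_dist[industry].append(avg)

for industry, avgs in sorted(industry_dist.items()):
    print(industry, avgs)
```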
Financial institutions are not immune to successful malware deployments, but most of them have relatively few (and other analyses of the BitSight data show that financial institutions detect and generally clean up infections pretty quickly). This compares nicely with threat-event data in Figure 18.
Insurance and Retail organizations begin to show more diversity—hence, more infections—with the situation getting worse as we move to Utilities. Ultimately, the “leader” in near-pervasive infections across the majority of underlying organizations is Education. This should come as no surprise, given the regular influx of unmanaged devices as hordes of innocent youth invade our halls of higher learning. Toga! Toga!
26 Read the BitSight Insights reports for more information on their methodology: bitsighttech.com/resources/topic/bitsight-insights
27 Note the log scale on the x-axis and free scales on the y-axis.
[Figure 18. Distribution of “Time to Fix” by industry vertical: density of “Time to Fix” within industry organizations (log scale: 1, 3, 7, 20, 55 days) for Financial Services, Insurance, Retail, Utilities, and Education.]
Figure 19 from the 2014 DBIR presented the frequency of incident patterns across the various industry verticals. The major takeaway was that different industries exhibit substantially different threat profiles and therefore cannot possibly have the same remediation priorities. That may be a rather “no duh” finding, but keep in mind most security standards treat all requirements as equal stepping stones on a path to 100% compliance. Past reports have emphasized that with security, there is no “one size fits all” approach. It is our fervent hope that that data sowed some seeds of change, and this year we’d like to help grow that crop a bit more.
Whereas last year’s report asked “Do all organizations share similar threat profiles?”, we now want to explore what we believe to be a much better question: “Which industries exhibit similar threat profiles?” Just as our nine patterns helped to simplify a complex issue last year, we believe that answering this question can help clarify the “so what?” question for different verticals. Figure 19 measures and provides, at least in part, the answer to that question.28
28 To look up the three-digit NAICS codes, visit: census.gov/eos/www/naics/index.html
INDUSTRY PROFILES
Raising the Stakes with Some Takes on NAICS
With security, there
is no “one size fits all”
approach.
[Figure 19. Clustering on breach data across industries: each dot is a three-digit NAICS subsector, colored by industry (Accommodation, Administrative, Educational, Entertainment, Financial Services, Healthcare, Information, Management, Manufacturing, Mining, Other Services, Professional, Public, Real Estate, Retail, Trade, Transportation, Utilities).]
Although we realize that at first glance it may look like a drunken astronomer’s attempt at describing a faraway galaxy, once correctly deciphered, Figure 19 is actually a godsend of interesting observations. So, to provide you with the much-needed Rosetta Stone: Each dot represents an industry “subsector” (we chose to use the three-digit NAICS codes—rather than the first two only—to illustrate more specificity in industry groupings). The size of the dot relates to the number of incidents recorded for that subsector over the last three years (larger = more). The distance between the dots shows how incidents in one subsector compare to those of another. If dots are close together, it means incidents in those subsectors share similar VERIS characteristics such as threat actors, actions, compromised assets, etc. If far away, it means the opposite. In other words, subsectors with similar threat profiles appear closer together. Is that clear as mud now? Good! With that out of the way, let’s see what method we can draw from the madness.
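For intuition, the “distance” idea reduces to comparing feature vectors. Here is a minimal sketch; the subsector profiles below are made-up numbers rather than our VERIS data, with each value representing the fraction of that subsector’s incidents showing one characteristic.

```python
import math

# Hypothetical threat profiles: fraction of incidents with [external actor,
# malware action, server asset] for three NAICS subsectors.
profiles = {
    "445 (food retail)": [0.8, 0.7, 0.6],
    "452 (gen. merch.)": [0.8, 0.6, 0.7],
    "486 (pipelines)":   [0.1, 0.1, 0.9],
}

def distance(a, b):
    """Euclidean distance between two threat-profile vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Subsectors with similar VERIS characteristics plot close together;
# outliers such as pipeline transportation land far from everyone else.
d_retail = distance(profiles["445 (food retail)"], profiles["452 (gen. merch.)"])
d_outlier = distance(profiles["445 (food retail)"], profiles["486 (pipelines)"])
assert d_retail < d_outlier
```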
SOME OF THESE THINGS ARE NOT LIKE THE OTHERS
Some of these things just don’t belong. Can you tell which things are not like the others before we finish this section?
As you can see, most subsectors appear to be more or less playing along, but several others are busy doing their own thing. Put another way, some subsectors experience very different threats than those faced by the majority. That’s interesting on two different levels:
• One, it’s a bit surprising that we see any semblance of “a majority” at all. However, this has more to do with the wide panorama necessitated by the fringe minority. Zooming in enough to exclude the outlier subsectors shows a much more even spread.
• Two, it raises the question, “What is it about these fringe subsectors that makes their threat profiles so extraordinary?” A closer look at the three most distant outliers—pipeline transportation (486), oil and gas extraction (211), and support activities for mining (213)—reveals a very interesting connection: Namely, they form part of the energy supply chain.
IT’S MORE OF A FONDUE THAN A SALAD
The U.S. is traditionally described as a homogeneous “melting pot” of cultures, but some suggest it’s more like a salad bowl where individual cultures mix together while retaining their own unique aspects. It’s interesting to apply this motif to Figure 19.
There are a few closely grouped subsectors (e.g., the 44x retailers on the upper side of the main pack), but by and large, the colors/numbers intermingle in melting-pot fashion. And that’s a rather important discovery. It means that many subsectors in different industries actually share a closer threat profile than do subsectors in the same overall industry.
Many subsectors in different industries actually share a closer threat profile than do subsectors in the same overall industry.
For instance, see the bottom of the figure, where Monetary Authorities-Central Bank from the financial and insurance industry (521) falls between two subsectors in the manufacturing industry (32x). In other words, each of the manufacturing subsectors has more in common with central banks than they do with each other. You know, sort of like how the majority of us have more in common with our friends than we do with our families.
I CAN’T BELIEVE THOSE TWO ARE DATING
Similar to but separate from observation two is that some subsector neighbors seem as though they were bad matches on Tinder. For instance, why are general merchandise stores (452) right on top of data processing, hosting, and related services (518)? If I had a dollar for every time someone said, “I bet this data center sees the same attacks as my local mall,” I’d still be broke. There’s been some dirty laundry aired about athletes of late, but spectator sports (711) and laundry services (812)? Seriously? Also, what’s the deal with executive, legislative, and other general government support (921) overlapping with amusement, gambling, and recreation industries (713)? Wait—never mind; don’t answer that.
The fact that these “close cousins” may seem like strange bedfellows highlights the need for more thoughtful and thorough research into risk profiles across various types of organizations.
Incidents in many
industry subsectors
share similar VERIS
characteristics such
as threat actors,
actions, compromised
assets, etc.
Maybe we don’t understand the motives of our adversaries as well as we think we do. Maybe cyber risk has more to do with business models or organizational structure or company policies than which high-level industry category one falls under. We definitely have some more work to do to peel back the covers on this topic.
WE NEED MORE CROSS-SECTOR SHARING.
HOW COME EVERYBODY WANNA KEEP IT LIKE THE KAISER?
Likewise, information sharing, compliance, and regulatory standards imposed on an industry level may not be the best approach. Perhaps regulating common “risk activities” is the better route (e.g., how the Payment Card Industry Data Security Standard applies to all those who process, store, or transfer payments rather than any one particular industry). Maybe it’s some other way/means/criterion we haven’t thought of yet. But it’s clear that before we begin creating and enforcing a bunch of “cyber regulations” in the wake of the “cyber craziness” that was 2014, we need to better understand the true effects and efficacies of such actions.
It follows that our standard practice of organizing information-sharing
groups and activities according to broad industries is less than optimal.
It might even be counterproductive.
Given the above, it follows that our standard practice of organizing information-sharing groups and activities according to broad industries is less than optimal. It might even be counterproductive. Is this a case where our biases and faulty assumptions are blinding us? (Say it ain’t so!) With all the focus, innovation, and regulation around cyber-info/intel sharing these days, this is something we really need to consider and investigate further.
Information sharing,
compliance, and
regulatory standards
imposed on an industry
level may not be the
best approach.
IMPACT
In the Beginning, There Was Record Count
If we had $201 for every time someone asked us, “Do you have data on the cost of breaches?”, we’d have $128,037.29 For the past seven years, we’ve had to answer that question with an apologetic “No,” while doing our best to explain why.30 But not this time; we’re absolutely ecstatic to offer an anticipatory “Yes!” to that question in this long-overdue section. It took us eight years to get here, but “better eight than never,” right?
That we always get the impact question is completely understandable. When budgeting and operating an InfoSec program, accurately assessing what’s likely to happen and how much it’ll cost are both critically important. A lack of reliable estimates leads to a creative environment for decision making,31 where underspending, overspending, and useless spending invariably result. Regrettably, there is a large and glaring gap in the security industry when it comes to quantifying losses. To fill that gap, organizations typically use qualitative methods of rating loss or something like the cost-per-record estimate promoted in the 2014 Cost of Data Breach Study from surveys conducted by the Ponemon Institute.
29 Assuming that’s the average cost per question.
30 Short answer: Our forensic investigators aren’t paid to quantify losses and none of the other DBIR contributors has ever provided loss data outside of payment card fraud totals.
31 Calibrated magic risk-ball says: “Buy DLP.”
Figure 20. Cost per record by records lost (n=191)
[Figure: log-log scatter of cost per record (US$0.01 to $100k) against records lost (10 to 100M).]
Our approach to estimating loss is based on actual data and considers multiple contributing factors—not just number of records.
In this section, we seek to build an alternative—and more accurate—approach to estimating loss that is based on actual data and considers multiple contributing factors (not just number of records). This is made possible through a new DBIR contributor, NetDiligence, which partners with cyber-insurance carriers to aggregate data on cyber liability insurance claims and produces its own annual Cyber Liability & Data Breach Insurance Claims study. From the data provided, we extracted 191 insurance claims with loss of payment cards, personal information, and personal medical records, as well as sufficient detail to challenge a few existing theories and test some new ones.
58 CENTS: GET FIT OR DIE TRYIN’
The established cost-per-record amount for data breaches comes from dividing a sum of all loss estimates by total records lost. That formula estimates a cost of $201 per record in 2014,32 and $188 the year before.33 Aside from the inherent “flaw of averages,”34 the cost-per-record model is often used by organizations in ways that were unintended by the authors (who recommend not applying the model to breaches exceeding 100,000 records). This approach has the advantage of being simple to calculate, remember, and apply. But is estimating impact a simple task, and does an average cost-per-record model accurately fit real-world loss data? Let’s investigate that further.
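The formula itself is one line. The numbers below are hypothetical claims, chosen to show how a single megabreach can drag the average down:

```python
# Hypothetical claims: (total_cost_usd, records_lost).
claims = [
    (40_000,    1_000),       # small breach: $40 per record
    (200_000,   20_000),      # mid-size breach: $10 per record
    (2_000_000, 50_000_000),  # megabreach: $0.04 per record
]

# The established model: sum of all losses divided by total records lost.
total_cost = sum(cost for cost, _ in claims)
total_records = sum(records for _, records in claims)
cost_per_record = total_cost / total_records
print(f"${cost_per_record:.2f} per record")  # a few cents, dominated by the megabreach
```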
If we apply the average cost-per-record approach to the loss claims data, we get a rather surprising amount: $0.58. You read that right—the average cost of a data breach is 58 cents per record! That’s a far cry from the roughly $200 figure we’re familiar with. What’s going on here? Part of the issue is the exclusion of breaches over 100,000 records in the existing model combined with the inclusion of soft costs that don’t show up in the insurance claims data. The other part of that answer is supplied by Figure 21, which plots records lost vs. cost per record (on a log scale).
The smaller breaches toward the left in Figure 21 average out to more (often a lot more) per-record costs than the larger breaches. Toward the extreme right end of the scale (100M), the cost per record can drop down to just a penny or two. Also, don’t let what looks to be a nice and even spread deceive the eyes into seeing a linear relationship; the fact that this is on a log scale35 is a very good indication that the records-to-cost relationship is not linear.
32 Ponemon, Larry. 2014 Cost of Data Breach Study: Global Analysis. Ponemon Institute, sponsored by IBM Corporation. Retrieved February 2015 (2014).
33 Ponemon, Larry. 2013 Cost of Data Breach Study: United States. Ponemon Institute, sponsored by Symantec. Retrieved February 2015 (2014).
34 Savage, Sam L. The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty. John Wiley & Sons, 2009.
35 Log scales increase by an order of magnitude. In this section, each mark on the axes is 10 times the previous mark. Plotting on a log scale is a common technique for presenting data that, for instance, exhibit exponential growth or decline.
Figure 21. Total claim amount by records lost (n=191)
[Figure: log-log scatter of payout (US$10 to $100M) against records lost (10 to 100M), with three reference lines: our average cost per record of 58¢; Ponemon’s 2014 cost per record of $201 (up to 100k records); and our estimate using our improved model.]
58¢: AVERAGE COST PER RECORD WAS 58¢; HOWEVER, THIS IS A VERY POOR ESTIMATE OF LOSS, SO WE BUILT A BETTER MODEL.
Sure enough, another log-scale plot of records lost to total cost in Figure 22 (not per-record cost as in Figure 21) shows a rather clear relationship. For funsies, we threw a red line onto Figure 21 for the $0.58-per-record model derived from this data, a green line for the $201 per record put forth by Ponemon, and a blue line that represents a log-log regression model36 that achieved the best fit to the data. It’s apparent that the green and red models will vastly underestimate smaller breaches and overestimate the megabreaches. NetDiligence captured our sentiments about such an approach perfectly when it said, “Insurers should not feel comfortable estimating potential losses using any standard cost-per-record figure,” and we couldn’t agree more. Both the $0.58 and $201 cost-per-record models (red and green lines) create very poor estimators, while the log-log model (blue) follows the nonlinear behavior of the data.
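A log-log regression of this kind is straightforward to reproduce. The sketch below fits a line in log10-log10 space on simulated claims that merely mimic the shape of the data; the real NetDiligence sample and our fitted coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated claims: cost grows sublinearly with records, with lognormal noise.
records = 10 ** rng.uniform(1, 8, size=191)            # 10 to 100M records
cost = 2000 * records ** 0.6 * rng.lognormal(0, 1.0, size=191)

# Fit log10(cost) = b0 + b1 * log10(records): a straight line in log-log space.
b1, b0 = np.polyfit(np.log10(records), np.log10(cost), deg=1)

# Back-transform to dollars for a point estimate at 1,000 records.
est_1k = 10 ** (b0 + b1 * np.log10(1_000))
print(f"slope = {b1:.2f}, point estimate at 1k records = ${est_1k:,.0f}")
```

The fitted slope lands near the 0.6 used to generate the data, which is the sense in which a log-log line "follows the nonlinear behavior": a slope below 1 means cost grows more slowly than record count.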
RECORDS TELL ONLY HALF THE STORY
Developing a “better” model is one thing, but the real question is whether it’s a good model. Who wants a weak model that spits out a number that is all but guaranteed to be wrong? For that, you can just use a pair of D20 risk dice. There are two main aspects to the goodness of a model: 1) how well it fits the data, and 2) how precise its predictions will be. Stats nerds measure the first aspect using the coefficient of determination (or R2), which calculates the percentage of stuff going on in this data (or variance for the initiated) that is explained by the model. A low R2 tells us there’s a lot happening that the model isn’t capturing, while a high R2 indicates a good fit.
The R2 value of our better model (the teal line in Figure 22) is 0.537, meaning it only describes about half of the total variance in the data. Said differently, there’s a lot of stuff contributing to the cost of breaches besides the number of records lost. Said even differently-er, records tell us only half the story when it comes to impact. Unfortunately, our buddy R2 can’t tell us exactly what those secret factors are. Perhaps having a robust incident-response plan helps, or keeping lawyers on retainer, or prenegotiated contracts for customer notification and credit monitoring, or perhaps reading the DBIR religiously would help. All we can do is speculate, because whatever it is, we just know it isn’t in the claims data (though our money is on DBIR reading).
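R2 itself is easy to compute by hand, which makes the “half the variance” idea concrete. A self-contained sketch with toy numbers:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: share of total variance explained."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)      # what the model misses
    ss_tot = np.sum((y - y.mean()) ** 2)   # total variance around the mean
    return 1 - ss_res / ss_tot

# A model capturing only the coarse trend explains about half the variance.
y = [1.0, 2.0, 3.0, 4.0]
y_hat = [2.1, 2.1, 2.9, 2.9]
print(r_squared(y, y_hat))      # about 0.51
assert r_squared(y, y) == 1.0   # a perfect fit scores exactly 1
```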
The forecast average loss for a breach of 1,000 records is between
$52,000 and $87,000.
Since our glass of model strength is only half full, the precision of the model will suffer a bit. This means we need broad ranges to express our confidence in the output. On top of that, our uncertainty increases exponentially as the breach gets larger. For example, with the new model, the average loss for a breach of 1,000 records is forecast to be between $52,000 and $87,000, with 95% confidence. Compare that to a breach affecting 10 million records, where the average loss is forecast to be between $2.1 million and $5.2 million (note that these are average losses, not single-event losses; see below). Figure 22 gives a visual representation of the model and accuracy. The teal line is the single-point estimate, and the shaded area is our confidence around the average loss. As the record count increases, the overall prediction accuracy decreases and the shaded confidence interval widens to account for the growing uncertainty. Say what you like about the tenets of wide confidence intervals, dude; at least it’s an ethos.
36 Look for more details behind this model in the coming year.
Figure 22. Expected average loss by records lost
[Figure: expected loss (US$0 to $15M) against number of records (10M to 100M); the shaded region represents the estimated average loss with 95% confidence.]
Our new breach-cost model accounts for the uncertainty as record volume increases.
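The widening band in Figure 22 is the standard behavior of a confidence interval for a mean prediction: the farther a point sits from the center of the data, the larger its standard error. A sketch on simulated log-log data, using a normal z of 1.96 as an approximation; none of the actual model coefficients appear here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated log-log claims data (shape only; not the NetDiligence sample).
x = rng.uniform(1, 8, 191)                   # log10(records lost)
y = 2.0 + 0.6 * x + rng.normal(0, 0.6, 191)  # log10(total cost)

b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))  # residual standard error

def mean_ci(x0, z=1.96):
    """Approximate 95% CI (in dollars) for average cost at log10(records) = x0."""
    se = s * np.sqrt(1 / len(x) + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2))
    center = b0 + b1 * x0
    return 10 ** (center - z * se), 10 ** (center + z * se)

# The interval is tight near the bulk of the data and widens toward the extremes.
lo3, hi3 = mean_ci(3.0)   # 1,000 records
lo8, hi8 = mean_ci(8.0)   # 100,000,000 records
print(hi3 / lo3, hi8 / lo8)
```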
IT’S ALL ABOUT THAT BASE (NO. RECORDS)
So what else matters besides the base record count when it comes to breaches? To help answer that, we converted the claims data set into VERIS format to test things like whether insiders caused more loss than outsiders and if lost devices led to higher impact than network intrusions. After countless permutations, we found many significant loss factors, but every single one of those fell away when we controlled for record count. What this means is that every technical aspect of a breach only mattered insomuch as it was associated with more or fewer records lost, and therefore more or less total cost. As an example, larger organizations post higher losses per breach, but further investigation reveals the simple truth that they just typically lost more records than smaller organizations, and thus had higher overall costs. Breaches with equivalent record loss had similar total costs, independent of organizational size. This theme played through every aspect of data breaches that we analyzed. In other words, everything kept pointing to records, and technical efforts to minimize the cost of breaches should focus on preventing or minimizing compromised records.
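The “controlled for record count” step is an ordinary confounder check. This simulation (invented coefficients, not our claims data) shows how an apparent organization-size effect evaporates once record count enters the regression:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
large_org = rng.integers(0, 2, n)                    # 1 = large organization
log_records = rng.normal(4 + 1.5 * large_org, 1, n)  # large orgs lose more records
log_cost = 1.0 + 0.7 * log_records + rng.normal(0, 0.3, n)  # cost driven by records only

# Naive comparison: large organizations look far more expensive per breach...
naive_gap = log_cost[large_org == 1].mean() - log_cost[large_org == 0].mean()

# ...but once record count is in the model, the size coefficient collapses to ~0.
X = np.column_stack([np.ones(n), log_records, large_org])
beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
print(f"naive size gap = {naive_gap:.2f}, size coefficient = {beta[2]:.2f}")
```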
Keep in mind that we’re not saying record count is all that matters; we’ve already demonstrated that it accounts for only half of the story. But it’s all that seems to matter among the data points we have at our disposal. What we’ve learned here is that while we can create a better model than cost per record, it could be improved further by collecting more and different data, beyond the technical specifics of the breach.
LET IT GO, LET IT GO
The cold (cost-per-record) figure never bothered us anyway, but we think it’s time to turn away and slam the door. To that end, we wrap up this section with a handy lookup table that includes a record count and the single-point prediction that you can use for “just give me a number” requests (the “expected” column in the middle). The rest of the columns show 95% confidence intervals, first for the average loss and then the predicted loss. The average loss should contain the mean loss (if there were multiple incidents). The predicted loss shows the (rather large) estimated range we should expect from any single event.
RECORDS      PREDICTION (LOWER)  AVERAGE (LOWER)  EXPECTED    AVERAGE (UPPER)  PREDICTION (UPPER)
100          $1,170              $18,120          $25,450     $35,730          $555,660
1,000        $3,110              $52,260          $67,480     $87,140          $1,461,730
10,000       $8,280              $143,360         $178,960    $223,400         $3,866,400
100,000      $21,900             $366,500         $474,600    $614,600         $10,283,200
1,000,000    $57,600             $892,400         $1,258,670  $1,775,350       $27,500,090
10,000,000   $150,700            $2,125,900       $3,338,020  $5,241,300       $73,943,950
100,000,000  $392,000            $5,016,200       $8,852,540  $15,622,700      $199,895,100
The table should be easy to read. If you’re an optimist, steer to the left. FUDmongers should veer to the right. However, looking at this table with its wide ranges, there is definitely some opportunity for improving the estimate of loss from breaches. But at least we have improved on the oversimplified cost-per-record approach, and we’ve discovered that technical efforts should focus on preventing or minimizing compromised records.
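For “just give me a number” requests, the expected column reduces to a tiny lookup helper. The values are copied from the expected column of Figure 23; the step-function behavior between rows is our simplification, so interpolate if you need in-between counts.

```python
import bisect

# Single-point estimates from the expected column of Figure 23.
RECORDS = [100, 1_000, 10_000, 100_000, 1_000_000, 10_000_000, 100_000_000]
EXPECTED = [25_450, 67_480, 178_960, 474_600, 1_258_670, 3_338_020, 8_852_540]

def expected_loss(records):
    """Nearest tabulated single-point estimate at or below `records`
    (clamped to the first row for very small counts)."""
    i = max(bisect.bisect_right(RECORDS, records) - 1, 0)
    return EXPECTED[i]

print(expected_loss(1_000))    # the 1,000-record row
print(expected_loss(500_000))  # falls back to the 100,000-record row
```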
Figure 23. Ranges of expected loss by number of records
Larger organizations have higher losses per breach, but they typically lose more records and have higher overall costs.
During the production of the 2013 DBIR, we had the crazy idea that there must be a way to reduce the majority of attacks into a handful of attack patterns, and we proved our theory with great success in the 2014 DBIR. We used the same hierarchical clustering technique on the 2015 corpus and—lo and behold—it worked again (data science FTW!).
The headliner from the 2014 DBIR was that 92% of all 100,000+ incidents collected over the last 10 years fell into nine basic patterns. Thankfully, that finding held true this past year as well (96%), so we avoid getting egg on our face. Yay.
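The clustering idea can be illustrated with a toy agglomerative pass (single linkage, Manhattan distance; our real procedure and VERIS feature set are far richer than this sketch): incidents with similar characteristics keep merging until only a handful of clusters, the patterns, remain.

```python
import numpy as np

# Toy incidents: 1/0 flags for [external actor, malware, phishing, physical, error].
incidents = np.array([
    [1, 1, 1, 0, 0],   # crimeware-like
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],   # theft/loss-like
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],   # error-like
])

def cluster(points, k):
    """Minimal single-linkage agglomerative clustering down to k clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.sum(np.abs(points[i] - points[j]))
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)   # merge the closest pair
    return clusters

print(cluster(incidents, 3))   # three clusters matching the comment labels
```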
While the threats against us may “seem” innumerable, infinitely varied,
and ever-changing, the reality is they aren’t.
This is nifty from a data-wonk perspective, but the real power of that statistic lies in what it means for security risk management. It suggests that, while the threats against us may seem innumerable, infinitely varied, and ever-changing, the reality is they aren’t. This certainly doesn’t diminish the significant challenges faced by defenders, but it does imply a threat space that is finite, understandable, and at least somewhat measurable. If that is indeed the case—and 11 years of data is a pretty strong baseline—then threats may just be more manageable than some of the we-should-all-just-give-up-now-because-our-adversaries-are-superhuman crowd likes to promote.
INCIDENT CLASSIFICATION PATTERNS
Figure 24. Frequency of incident classification patterns across security incidents: Miscellaneous Errors 29.4%, Crimeware 25.1%, Insider Misuse 20.6%, Physical Theft/Loss 15.3%, Web App Attacks 4.1%, Denial of Service 3.9%, Cyber-Espionage 0.8%, POS Intrusions 0.7%, Payment Card Skimmers 0.1%.
96%: WHILE WE SAW MANY CHANGES IN THE THREAT LANDSCAPE IN THE LAST 12 MONTHS, THESE PATTERNS STILL COVERED THE VAST MAJORITY OF INCIDENTS (96%).
There are a few interesting things to note about the breakdown of incident patterns. Let’s start with Figure 24, which addresses all security incidents reported for 2014. It may not be obvious at first glance, but the common denominator across the top four patterns—accounting for nearly 90% of all incidents—is people. Whether it’s goofing up, getting infected, behaving badly, or losing stuff, most incidents fall in the PEBKAC and ID-10T über-patterns. At this point, take your index finger, place it on your chest, and repeat “I am the problem,” as long as it takes to believe it. Good—the first step to recovery is admitting the problem.
With that uncomfortable intervention out of the way, let’s hurriedly shift conversation to Figure 25, which focuses on confirmed data breaches. It doesn’t remove the user aspect entirely, but it does allow us to point the finger in a different direction.37 POS breaches jump up to the pole position, which shouldn’t be too much of a shocker given the headlines in 2014. Crimeware is still #2, but notice the difference in volume between Figures 24 and 25: It essentially contrasts the stuff that makes your mom’s machine run like an 80386 versus the more malicious kits designed to pilfer data. The fact that Cyber-Espionage ranks higher than Insider Misuse and Web App Attacks is rather surprising. It’s hard to discern from the data if that’s due to legitimate trends, contributor foci, low-fidelity data, or a mix of all the above (probably the latter).
Did Payment Card Skimmers and POS Intrusions go extinct in 2012?
Nope. We just tripled contributors that year and brought in a large
volume of new threats.
Showing Figure 26 is risky because it may cause more confusion than valid conclusions, but what the heck—we live on the edge. Although we’d like it to purely reflect changes in the external threat environment over the years, it more realistically reflects changes to our data set caused by a rapidly expanding base of contributors. Did Payment Card Skimmers and Point-of-Sale Intrusions really go extinct in 2012? Nope. We just tripled contributors that year and brought in a large volume of new/different threats (e.g., Miscellaneous Errors). Given that kind of volatility in the data set, it’s amazing that some, like Insider Misuse and Web App Attacks, remain quite stable over time. Figure 27 gives a breach-centric view of this same concept.
37 For now, ignore the fact that most of these breaches still involve some kind of indirect error or omission.
Figure 25. Frequency of incident classification patterns with confirmed data breaches (n=1,598): POS Intrusions 28.5%, Crimeware 18.8%, Cyber-Espionage 18%, Insider Misuse 10.6%, Web App Attacks 9.4%, Miscellaneous Errors 8.1%, Physical Theft/Loss 3.3%, Payment Card Skimmers 3.1%, Denial of Service 0.1%.
A lot of threat patterns
didn’t reveal major trend
changes. For this reason,
some may wish to refer
back to the 2014 DBIR
for a primer on incident
patterns.
2015 DATA BREA CH INVES TIGATIONS R EPORT 33
So, take whatever you can from Figures 26 and 27, but don’t say we didn’t warn you about the dangers of building your five-year risk projections around them. View them more like puzzles that we’re piecing together over time.
Figure 28 delivers another twist on incident pattern prevalence by adding in the threat actor element. The connection between state-affiliated groups and espionage earns the Captain Obvious award, but we thought the other pairings were worth showing.
We gave our data visualization experts the challenge of making an even more information-dense version of Figure 19 from last year’s report. Figure 29, on the next page, is what they came up with. Not only does it show the frequency of breaches and distributed denial-of-service (DDoS) patterns across industries, but also a three-year trend via the bar charts in the background. To use Figure 29, identify your industry in the right-hand column. Refer to the NAICS website38 if you’re unsure where
38 census.gov/cgi-bin/sssd/naics/naicsrch?chart=2012
Figure 26. Frequency of incident classification patterns over time across security incidents (2006–2014): proportions from 0 to 1 for Web App Attacks, Insider Misuse, POS Intrusions, Payment Card Skimmers, Miscellaneous Errors, Physical Theft/Loss, Denial of Service, Cyber-Espionage, and Crimeware.
Figure 27. Count of incident classification patterns over time with confirmed data breaches (2006–2014): Web App Attacks 458, POS Intrusions 419, Cyber-Espionage 290, Crimeware 287, Insider Misuse 129, Payment Card Skimmers 108, Physical Theft/Loss 35, Miscellaneous Errors 11.
your organization fits. The percentages are relative to each industry. For example, POS attacks
represent 91% of all Accommodation breaches. The coloring should help you quickly identify hot
spots for your industry and/or discern differing threat profiles across multiple industries.
Repeat readers will find this year’s incident pattern sections quite a bit shorter than last year’s. Besides making room for the “Before and Beyond the Breach” segment, there are two main reasons for this tack: 1) A lot of the data lacked the details necessary to dig deep enough to strike new gold, and 2) a lot of the threat patterns didn’t reveal major trend changes. Honestly, how much can the underlying forces of Physical Theft/Loss change in a year’s time?
For this reason, some may wish to refer back to the 2014 DBIR for a primer on the incident patterns. In the following sections, we aim to highlight new, interesting, insightful, and instructive nuggets of wisdom rather than restate the basics. It’s our hope that this to-the-point approach strikes a good and useful balance.39
39 If you wan t to see how we ll your own o rganiz ation fa res with t hese stat s or if you wan t to get mor e insight i nto the pa ttern s,
take a loo k at the Splu nk app for D BIR, at splunkbase. splunk.com/.
Figure 29. Frequency of data disclosures by incident pattern and victim industry. Industries: Accommodation, Administrative, Educational, Entertainment, Financial Services, Healthcare, Information, Manufacturing, Mining, Other Services, Professional, Public, Retail. Patterns: Crimeware, Cyber-Espionage, Denial of Service, Physical Theft/Loss, Miscellaneous Errors, Payment Card Skimmers, Point of Sale, Insider Misuse, Web App Attacks.

Figure 28. Frequency of data breaches by incident pattern and threat actor. Actors: Activist, Organized Crime, State-Affiliated, Unaffiliated, across the same nine incident patterns.
POINT-OF-SALE INTRUSIONS

We debated at length40 whether to rename this pattern to "The POS Paradox" or keep it as just plain ol' "Point-of-Sale Intrusions." You can see where we ended up, but you might want to pop some more popcorn as we take you on a walk down memory lane to see where POS incidents have been and where they are today.

When POS breaches were at their peak (back in the 2011 and 2012 DBIRs), there was little buzz about them in information security circles. We suspect that's because those breaches generally involved small businesses and low-dollar amounts. In truth, it seemed a bit strange to us to make a big deal out of 43 pwnd PANs from "Major Carl's Italian Eats," especially given the jackpot hauls of just a few years earlier.

After the dust settled from prosecutions of perpetrators involved in the megabreaches of the 2005–2009 time frame, we were beginning to think that massive payment card plunders were becoming a bit passé, with smaller, opportunistic POS intrusions becoming commonplace. The fruitful combination of Internet-facing POS devices and default passwords made compromise trivial for attackers, and the smaller amounts of compromised data, mixed with the lack of logging (or any controls, really), limited the likelihood of getting caught.

Then Q4 of 2013 happened, crushing the idea that high-profile, headline-grabbing payment card breaches had been put out to pasture with Code Red, SQL Slammer, and phone phreaking. The evolution of attacks against POS systems continued in 2014, with large organizations suffering breaches alongside the small retailers and restaurants that had been the cash cows for years. Although the actors and actions are the same for the majority of breaches, the impact of POS breaches on large and small organizations is far from identical, as seen in Figure 30.

40 Yep, we did. That's how we roll. But we're really fun at parties. Honest.
Figure 30. Compromised payment card records from assets by organizational size (small is less than 1,000 employees) over time, 2009–2015. Series: POS (Small), POS (Large), Everything Else (Small), Everything Else (Large), Databases (Small), Databases (Large).

Most affected industries: Accommodation, Entertainment, and Retail.
There has been a definite evolution in POS attacks from simple storage scraping to active RAM skimming across all breach types. We can, however, see distinct differences between large and small organizations in the methods used to gain access to the POS devices. For small orgs, the POS device is directly targeted, normally by guessing or brute-forcing41 the passwords. Larger breaches tend to be multi-step attacks, with some secondary system being breached before the POS system is attacked.42
Criminal innovation is not limited to the Payment Card Skimmers pattern.43 Last year, there were several instances where vendors providing POS services were the source of the compromise. Some vendors had keyloggers installed via successful phishing campaigns or network penetrations. All breached POS vendors ended up with their remote access credentials compromised, inviting attackers into customer environments where the card harvesting began.

We also noticed a shift from a reliance on default credentials to the capture and use of stolen credentials. These are not mere opportunistic attacks, either. Many incidents involved direct social engineering of store employees (often via a simple phone call) to trick them into providing the password needed for remote access to the POS.

Attacks on POS systems are not new, and they are relevant to organizations big and small that swipe cards to collect revenue. The attack methods are becoming more varied, even against small businesses, an indication that the threat actors are able to adapt, when necessary, to satisfy their motives (and greed will not be trending down any time soon).
HOW DO I LEARN MORE?

Find out what monitoring options are available for your POS environment (if any) and start using them. Your level of diligence must match the increased level of sophistication and patience being demonstrated by the hackers.

While we have tried to refrain from best-practices advice this year, there's no getting around the fact that credentials are literally the keys to the digital kingdom. If possible, improve them with a second factor such as a hardware token or mobile app, and monitor login activity with an eye out for unusual patterns.
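That last bit of advice can start small. The sketch below is a minimal illustration, not a product recommendation: it flags remote-access logins that come from a source IP never before seen for that account, or that fall outside assumed business hours. The log format, account names, IP addresses, and thresholds are all hypothetical.

```python
from collections import defaultdict
from datetime import datetime

def flag_unusual_logins(events, known_ips=None):
    """Flag remote-access logins from a source IP never seen for that user,
    or outside assumed business hours (08:00-22:00). Two toy heuristics."""
    seen = defaultdict(set, known_ips or {})
    alerts = []
    for user, ip, ts in events:  # hypothetical (user, source_ip, ISO time) log
        when = datetime.fromisoformat(ts)
        if ip not in seen[user]:
            alerts.append((user, ip, ts, "new source IP"))
        if not 8 <= when.hour < 22:
            alerts.append((user, ip, ts, "outside business hours"))
        seen[user].add(ip)
    return alerts

events = [
    ("posadmin", "10.0.0.5", "2015-03-02T09:15:00"),
    ("posadmin", "10.0.0.5", "2015-03-02T23:40:00"),
    ("posadmin", "198.51.100.7", "2015-03-03T10:05:00"),
]
# Seed the baseline with the IP the vendor normally uses; the late-night
# login and the login from a new address are then both flagged.
alerts = flag_unusual_logins(events, known_ips={"posadmin": {"10.0.0.5"}})
```

A real deployment would feed this from VPN or remote-desktop logs and tune the rules per site; the point is simply that even crude baselining can surface the kind of stolen-credential reuse described above.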
41 396 incidents in the DBIR corpus.
42 This is eerily similar to cases in the Cyber-Espionage pattern.
43 At least some enterprises, albeit criminal ones, are using Six Sigma effectively.
PAYMENT CARD SKIMMERS

Long-time readers of the DBIR can no doubt recite the core elements of this pattern by chapter and verse: Eastern European actors target U.S. victims through skimming devices on ATMs and gas pumps.44

Unsurprisingly, little has changed. So little, in fact, that we'll ask you to keep last year's section open to pages 35 and 36 while we home in on one bit of good news in the 2015 data set: In instances where law enforcement can determine the start of a skimming attack, detection times are definitely getting better, shifting from months and weeks to hours and days.

OUT OF SIGHT, OUT OF CASH?

The stories in this pattern may read like ancient sagas, but the actors continue to innovate. Previous DBIRs document the use of locally mounted pinhole cameras and remote cameras (both designed to obtain the coveted PIN) and the use of remote stripe-data collection via Bluetooth® or cellular devices. This year's improvements include the use of ridiculously thin and translucent skimmers that fit inside the card reader slot, as well as direct tapping of the device electronics to capture the data with nary a trace of visibility. Gone (mostly) are the days of the quick tug to test for the presence of these devices. Still, all it really takes to thwart certain classes of these card-present cybercrime advancements is shielding the video capture component with your hand; and, remember, be as creative as you like when doing so.

44 2014 DBIR, Pattern Chapter 6, Paragraph 1, Verse 1.
Figure 31. Time to discovery within the Payment Card Skimmers pattern (n=22). Bar chart over discovery times of seconds, minutes, hours, days, weeks, months, years, and never.

Most affected industries: Financial Services and Retail.
CHIP AND SKIM

In October of 2015, the Europay, MasterCard, and Visa (EMV) chip-and-PIN mandate goes into full effect in the U.S., just as we learn that poor implementations are still left vulnerable to attack.45 Furthermore, despite a date being set, it will take time to deploy new equipment to a critical mass of merchants and to reissue cards to the still unPINned masses.

U.S. consumers who are eagerly awaiting the deadline may want to curb their enthusiasm just a bit. The main change46 that is taking place is an invisible (to the consumer) shift in liability. You'll still see mag-stripe readers aplenty, and when there is an incidence of card fraud, whichever party has the lesser technology (merchants who haven't upgraded their terminals, or banks that haven't issued new EMV cards) will bear the blame.
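That "lesser technology" rule can be expressed as a tiny decision function. The sketch below is a deliberately simplified model of the counterfeit-card liability shift; real card-network rules carve out many more cases (ATMs, fuel pumps, lost-and-stolen fraud), so treat it as illustrative only.

```python
# Simplified model of the October 2015 U.S. liability shift: for counterfeit
# card fraud, the party with the lesser technology absorbs the loss.
def fraud_liability(merchant_has_emv_terminal, issuer_sent_emv_card):
    if issuer_sent_emv_card and not merchant_has_emv_terminal:
        return "merchant"  # issuer upgraded, merchant did not
    # If the merchant upgraded but the issuer did not, or if both (or
    # neither) upgraded, liability stays with the issuing bank, roughly
    # as it did before the shift.
    return "issuer"

# A chip card swiped at a mag-stripe-only terminal shifts the loss
# onto the merchant; every other combination leaves it with the issuer.
assert fraud_liability(False, True) == "merchant"
assert fraud_liability(True, False) == "issuer"
```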
Figure 32 tosses another wet blanket47 on heated expectations: it shows that the use of old-school cards remains high even in some regions with a plethora of new-school hardware; and, lest we forget, the U.S. will be playing catch-up with the rest of the globe for many years.
So, while we can (hopefully) expect to eventually see an overall reduction in physical skimmer-related incidents, attackers will:

1. Initially continue to move to the easiest current targets (i.e., areas with the lowest adoption rates).
2. Potentially increase the pace of current skimming activities (to get ahead of EMV adoption).
3. Attempt to exploit weaknesses that still surround EMV implementations.
4. Apply their technical and criminal prowess to other target-rich, yet related, vectors such as card-not-present/online transactions.
HOW DO I LEARN MORE?

Merchants should work with their providers to understand their chip-and-PIN reader options and look for solutions that are less prone to indirect attacks. Don't just replace one bad bit of technology with another.

Monitor your physical card environments through video surveillance and tamper monitoring to help continue the positive shift in time to detect (which will also help reduce overall merchant liability).

For those merchants who deal primarily in card-not-present or online transactions, you might want to consider upping your game when it comes to fraud monitoring (you do have fraud monitoring systems/processes in place now, right?) and ensuring you have response plans in place for when fraud eventually happens (and it will).
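As a flavor of what "upping your game" might mean in practice, here is a minimal, assumption-laden sketch of one classic fraud-monitoring rule: a velocity check that flags any card used more than a set number of times inside a sliding time window. The transaction format, card identifiers, and thresholds are invented for illustration; production systems layer many such rules together.

```python
from collections import deque
from datetime import datetime, timedelta

def velocity_flags(transactions, limit=3, window=timedelta(minutes=10)):
    """Return the set of card identifiers used more than `limit` times
    within a sliding `window` -- a toy card-not-present velocity rule."""
    recent = {}   # card -> deque of timestamps still inside the window
    flagged = set()
    for card, ts in transactions:  # hypothetical (card_id, ISO time) records
        when = datetime.fromisoformat(ts)
        q = recent.setdefault(card, deque())
        q.append(when)
        while when - q[0] > window:  # expire entries older than the window
            q.popleft()
        if len(q) > limit:
            flagged.add(card)
    return flagged

txns = [
    ("card-A", "2015-03-02T12:00:00"),
    ("card-A", "2015-03-02T12:01:00"),
    ("card-A", "2015-03-02T12:02:00"),
    ("card-A", "2015-03-02T12:03:00"),  # fourth use in ten minutes
    ("card-B", "2015-03-02T12:05:00"),
]
flagged = velocity_flags(txns)
```

The sketch assumes transactions arrive in time order; a real pipeline would also weight by amount, geography, and merchant risk before deciding whether to hold a transaction.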
45 Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson, Chip and Skim: Cloning EMV Cards with the Pre-Play Attack, Computer Laboratory, University of Cambridge, UK, 2012. cl.cam.ac.uk/~sjm217/papers/oakland14chipandskim.pdf
46 Remember, it's "Chip and Signature" in the U.S., so it's even weaker tech rolling out of the gate than Euro Chip and PIN.
47 EMV Adoption Report, EMVCo, June 2014. emvco.com/documents/EMVCo_EMV_Deployment_Stats.pdf
Figure 32. EMV adoption rate (as of June 2014)

Region                                      Terminal   Card
Asia Pacific                                   72%      17%
Canada, Latin America, and the Caribbean       85%      54%
Africa & the Middle East                       86%      39%
Europe Zone 2                                  91%      24%
Europe Zone 1                                  99%      82%
CRIMEWARE

To tag something solely as a malware incident is a common over-generalization and, as we all know, all generalizations are false. Malware is part of the event chain in virtually every security incident (it's difficult to get a computer virus onto paper records in a box of file folders, though we suspect Hollywood will find some way to do that soon).

Once these malevolent bytes invade a system, they surreptitiously usurp existing functionality and start performing actions of their own design. We see common one-two mal-punches in a few places, from maintaining persistence and staging advanced attacks (ref: Cyber-Espionage pattern) to capturing and exfiltrating data (ref: Point-of-Sale Intrusions pattern). This catch-all Crimeware pattern represents malware infections within organizations that are not associated with more specialized classification patterns such as Cyber-Espionage or Point-of-Sale Intrusions.

Like speeches by a politician, Crimeware incidents in our corpus are large in number and short on details, as these everyday incidents are less likely to receive a full forensic investigation or rise to the level of law enforcement involvement. They are also predominantly opportunistic and financially motivated in nature.
Most affected industries: Public, Information, and Retail.

[Bar chart: threat action varieties within the Crimeware pattern, including capture stored data, client-side attack, rootkit, export data, ransomware, downloader, and spyware/keylogger.]