Denial and Deception
in Cyber Defense
Kristin E. Heckman, Frank J. Stech, Ben S. Schmoker, and Roshan K. Thomas, The MITRE Corporation
As attack techniques evolve, cybersystems must also
evolve to provide the best continuous defense. Leveraging
classical denial and deception techniques to understand
the specifics of adversary attacks enables an organization
to build an active, threat-based cyber defense.
It is now widely recognized that traditional approaches to cyber defense have been inadequate. Boundary controllers and filters such as firewalls and guards, virus scanners, and intrusion detection and prevention technologies have all been deployed over the last decade. Nevertheless, sophisticated adversaries using zero-day exploits are still able to get in and, in many cases, establish a persistent presence. We must assume, then, that an adversary will breach border controls and establish footholds within the defender’s network, so we need to study and engage the adversary on the defender’s turf in order to influence any future moves. A key component in this new paradigm is cyber denial and deception (cyber D&D).
The goal of D&D is to influence another to behave in a way that gives the deceiver an advantage, creating a causal relationship between psychological state and physical behavior. Denial actively prevents the target from perceiving information and stimuli; deception provides misleading information and stimuli to actively create and reinforce the target’s perceptions, cognitions, and beliefs. Both methods generate a mistaken certainty in the target’s mind about what is and is not real, making the target erroneously confident and ready to act.
As adversaries’ attack techniques evolve, defenders’ cybersystems must also evolve to provide the best continuous defense. Engineering cybersystems to better detect and counter adversarial D&D tactics and to actively apply D&D against advanced persistent threats will force adversaries to move more slowly, expend more resources, and take greater risks. In doing so, defenders may possibly avoid, or at least better fight through, cyber degradation.
APRIL 2015 37
Table 1 shows a two-dimensional framework to apply D&D techniques.1 The first dimension relates to information (fact or fiction); the second relates to actions or behaviors (revealing or concealing). The deceiver uses denial to prevent the detection of the essential elements of friendly information (EEFI) by hiding what’s real, and employs deception to induce misperception by using the essential elements of deception information (EEDI) to show what’s false. The deceiver also has to hide the false information—that is, the nondisclosable deception information (NDDI)—to protect the D&D plan, and additionally show the real information—the nonessential elements of friendly information (NEFI)—to enhance the D&D cover story. Deception is a very dynamic process, and deception planners will benefit from the interplay of techniques from more than one quadrant in a deception operation.
Cyber D&D maps this framework to the cyber domain. Table 2 uses the D&D methods matrix to show high-level cyber D&D techniques (combinations of two or more tactics), organized according to whether they are fact or fiction, and whether they are revealed via deception methods or concealed via denial methods. This “packaging” of tactics can be an indicator of a sophisticated deception capability maturity (CM) for counterdeception (CD) purposes.
The deception chain is a high-level metamodel for cyber D&D operations management from a life-cycle perspective. Analogous to Lockheed Martin’s “cyber kill chain” model,2 the deception chain is adapted from Barton Whaley’s 10-step process for planning, preparing, and executing deception operations.3 The deception chain facilitates the integration of three systems—cyber D&D, cyber intelligence, and security operations—into the enterprise’s larger active defense system to plan, prepare, and execute deception operations.

Deception operations are conducted by a triad of equal partners working those three systems interactively: cyber D&D planners, cyber-intelligence analysts, and cybersecurity operators. Just as computer network defense (CND) is not any one tool but a system that deploys new technologies and procedures as they become available, cyber D&D must be thought of as an active defensive operational campaign, employing evolving
TABLE 1. D&D methods matrix.

Facts
• Deception, Mislead (M)-type — Reveal facts: nonessential elements of friendly information (NEFI). Reveal true information to the target; reveal true physical entities, events, or processes to the target.
• Denial, Ambiguity (A)-type — Conceal facts (dissimulation): essential elements of friendly information (EEFI). Conceal true information from the target; conceal true physical entities, events, or processes from the target.

Fiction
• Deception, Mislead (M)-type — Reveal fiction (simulation): essential elements of deception information (EEDI). Reveal false information to the target; reveal false physical entities, events, or processes to the target.
• Denial, Ambiguity (A)-type — Conceal fiction: nondisclosable deception information (NDDI). Conceal false information from the target; conceal false physical entities, events, or processes from the target.
TABLE 2. D&D methods matrix with cyber D&D techniques.

Facts
• Deception, M-type methods — Reveal facts (nonessential elements of friendly information): publish true network information; allow disclosure of real files; reveal technical deception capabilities; reveal misleading, compromising information; selectively remediate intrusion.
• Denial, A-type methods — Conceal facts (dissimulation; essential elements of friendly information): deny access to system resources; hide software using stealth; reroute network traffic; silently intercept network traffic.

Fiction
• Deception, M-type methods — Reveal fiction (simulation; essential elements of deception information): misrepresent intent of software; modify network traffic; expose fictional systems; allow disclosure of fictional information.
• Denial, A-type methods — Conceal fiction (nondisclosable deception information): hide simulated information on honeypots; keep deceptive security operations a secret; allow partial enumeration of fictional files.
tools, tactics, techniques, and procedures (TTTPs). We believe the deception chain is a flexible framework for embedding advanced TTTPs in operational campaigns while focusing on an organization’s mission objectives.
This triad of cyber D&D planners, cyber-intelligence analysts, and cybersecurity operators is essential for a threat-based active defense. There are eight phases in the deception chain.

Purpose

This initial phase helps enterprise managers define the strategic, operational, or tactical goal—in other words, the purpose of the deception—and the criteria that would indicate the deception’s success.
Collect intelligence
In the next phase, D&D planners define how the adversary is expected to react to the deception operation. This is done in part through the planners’ partnership with cyber intelligence to determine what the adversary will observe, how the adversary might interpret those observations, how the adversary might react (or not) to those observations, and how to monitor the adversary’s behavior. This intelligence will help planners during the last two phases (monitor and reinforce) to determine whether the deception is succeeding. One internal source of intelligence is intrusion campaign analysis.4 Broadly speaking, an intrusion campaign is a framework that combines all the related information about a particular intrusion into a set of activities.

Threat-sharing partnerships are another source of cyber intelligence and might involve government, private industry, or nonprofit organizations. For example, the Structured Threat Information eXpression (STIX) and Trusted Automated eXchange of Indicator Information (TAXII) systems, sponsored by the Office of Cybersecurity and Communications at the US Department of Homeland Security, provide a structured format for defenders to share threat indicators in a manner that reflects the trust relationships inherent in such transfers. STIX is a community-driven language used to represent structured cyber-threat information, and TAXII enables the sharing of information across organization and product boundaries to detect and mitigate cyber threats. A threat seen by one partner today might be the threat facing another partner in the near future. All of these sources of cyber intelligence can aid D&D planners in assessing an adversary’s cyber D&D CM, which in turn supports the development of an appropriately customized cyber D&D operation.
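Indicators shared through such partnerships are small structured objects. As a rough sketch (the field names and pattern string below are simplified illustrations modeled on STIX-style indicators, not a conformant STIX implementation), a defender might assemble one like this:

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(pattern, description):
    """Build a minimal STIX-like indicator as a plain dict.

    Field names loosely mirror common STIX indicator properties;
    this is an illustrative sketch, not a conformant implementation.
    """
    return {
        "type": "indicator",
        "id": "indicator--" + str(uuid.uuid4()),
        "created": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "pattern": pattern,
    }

# Hypothetical phishing-sender indicator a partner might share.
indicator = make_indicator(
    "[email-message:sender_ref.value = 'press@baddomain.example']",
    "Credential-phishing sender observed in a recent campaign",
)
print(json.dumps(indicator, indent=2))
```

A receiving partner can match the pattern against its own telemetry, which is how a threat seen by one partner today becomes a detection for another tomorrow.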
Design cover story
The cover story is what the defender wants the adversary to perceive and believe. The D&D planner will consider the critical components of the operation, assess the adversary’s observation and analysis capabilities, and develop a convincing story that “explains” the operation’s components observable to the adversary but misleads the adversary as to the meaning and significance of those observations. The D&D planner will decide what information must be hidden (the EEFI and the NDDI) and what information must be created and revealed (the EEDI and the NEFI). The D&D methods matrix in Table 1 aids planners by capturing the true and false information that must be revealed or concealed to make the deception operation effective. The planners and cybersecurity operators must decide what information “belongs” in the four cells of the matrix and get buy-in from enterprise managers for the deception goals and cover story.
Plan

In this phase, D&D planners analyze the characteristics of the real events and activities that must be hidden to support the deception cover story, identify the corresponding signatures that would be observed by the adversary, and plan to use denial tactics (such as masking, repackaging, dazzling, or red flagging3) to hide the signatures from the adversary. Planners also analyze the characteristics of the notional events and activities that must be portrayed and observed to support the cover story, identify corresponding signatures the adversary would observe, and plan to use deception tactics (such as mimic, invent, decoy, or double play3) to mislead the adversary.

In short, D&D planners turn the matrix cell information into operational activities that reveal or conceal the key information conveying the cover story. These steps must be coordinated with security operations so that they are as realistic and natural as possible, and the deception should allow the adversary to observe real operational events that support the cover story.
Prepare

In this phase, D&D planners design the desired effect of the deception operation and explore the available means and resources to create the effect on the adversary. This entails coordination with security operations on the timing for developing the notional and real equipment, staffing, training, and other preparations to support the deception cover story.
Execute

If the deception and real operational preparations can be synchronized and supported, the D&D planners and security operations must coordinate and control all relevant preparations so they can consistently, credibly, and effectively execute the deception cover story.
Monitor

D&D planners work with cyber intelligence and security operations to monitor and control the deception and real operations. This entails monitoring both friendly and adversary operational preparations, carefully watching the observation channels and sources selected to convey the deception to the adversary, and monitoring the adversary’s reaction to the “performance,” that is, the cover story execution. These targeted channels must remain open to the adversary, convey the planned deception, and be observed by the adversary.
Reinforce

If cyber intelligence on the adversary indicates that the deception operation does not seem to be “selling” the cover story to the adversary, the D&D planners may need to reinforce the cover story through additional deceptions, or to convey the deception operation to the adversary through other channels or sources. The planners may have to revisit the first phase of the deception chain, execute a backup deception, or plan another operation.
Malicious actors follow a common model of behavior to compromise valuable information in a target network. Attackers generally employ a cyberattack strategy, divided into the six phases described below, called the cyber kill chain or kill chain.2 Like the cyber kill chain, the deception chain is not always linear. Progression through the phases can be recursive or disjoint. One run of the kill chain models a single intrusion, but a campaign spanning multiple engagements builds on previous results and omits phases as necessary. Similarly, D&D planners and cybersecurity operators will selectively run through the deception chain to achieve their goals.

The deception chain is also applicable at each phase of the cyber kill chain, as illustrated by the following hypothetical deception operation goals associated with each kill chain phase. Later, we describe two case studies that will further demonstrate the interplay of the kill chain and the deception chain to defend against intrusions. Phase names in the case studies are italicized to emphasize the interconnectedness of the two chains.
• Recon: If defenders are aware of adversarial reconnaissance efforts, provide the adversary with a set of personae and a Web footprint for defensive targeting efforts in the delivery phase. Note that deception operations can be used to influence adversary actions in future kill chain phases.

• Weaponize: Making the adversary (wrongly) feel certain about an organization’s vulnerabilities, defense posture, or capabilities could enable the organization to recognize or defend against the adversary’s weaponized payload. If the recon phase was successful, the adversary will attempt to deliver the weaponized payload to one or more of the false personae.

• Exploit: Recognizing exploitation attempts, defenders may redirect the adversary to a honeypot environment, which appears to be part of a network that contains valuable information but is actually isolated and monitored by defenders. The goal is to conceal all honeypot “tells” or indicators to delay the adversary.

• Control: When the adversary has “hands on keyboard” access, provide the adversary with a high-interaction honeypot with a rich variety of information, designed with the D&D planners, to help identify the adversary’s motives, intentions, and capability maturity.

• Execute: Slow the adversary down to collect cyber intelligence.

• Maintain: Keep up the appearance of realism in a high-interaction honeypot by adding or retiring false personae, as well as maintaining existing personae and their “pocket litter,” such as files, email, password change history, login history, and browser history.
These examples also show that there may be a need for more than one deception operation during a single intrusion.
In 2013 the Syrian Electronic Army (SEA) attacked a number of news agencies, compromising user accounts and defacing public websites via stolen access. This group prioritized speed over subtlety, sending hundreds of emails eliciting user account credentials.5 The attacks were eventually mitigated with an organization-wide password reset along with second-factor authentication for sensitive information. Victims in this case could have leveraged D&D to both prevent compromise and allow faster response to the attack.
To foil the recon phase of this intrusion, nonexistent employees with plausible email addresses could be placed on public-facing websites. These mailboxes serve to solicit unwanted messages and proactively notify network defenders of attempts to target publicly visible employees. Defenders would prepare these false targets while reinforcing their effectiveness, executing on the attacker’s intrusion attempt, and monitoring the results.
The attackers gained remote access to internal user accounts, catapulting them past the exploit phase. Targeting specific information rather than trying to maintain long-term access during the control phase, the SEA quickly sent another round of internal emails soliciting access to the content production network. Several hours later, news headlines were modified with propaganda from the attackers.

Implementing an internally hosted honeypot to emulate the news agency’s content production system may have alerted security teams to the SEA’s attack. Because the only purpose of these systems is to collect suspicious requests, it should be assumed that any connections to these systems are malicious.
Another way to detect attackers
who are consolidating and expanding
their access involves seeding internal
systems with tripwire user accounts.
By creating plausible “administrator”
or “maintenance” accounts, defenders
can know a system is compromised as
soon as the login attempt is denied.
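The tripwire-account idea can be sketched in a few lines. The account names and event fields below are invented examples, not a prescribed implementation:

```python
# Decoy "administrator"/"maintenance" accounts seeded on internal
# systems. The names are hypothetical; pick ones that blend into
# your environment's naming conventions.
TRIPWIRE_ACCOUNTS = {"maint-svc", "backup-admin", "printer-admin"}

def check_auth_event(username, source_ip):
    """Return an alert dict if a tripwire account was touched, else None.

    Any attempt, successful or not, is suspicious: no legitimate user
    or process should ever reference these accounts.
    """
    if username in TRIPWIRE_ACCOUNTS:
        return {"alert": "tripwire-login", "user": username, "src": source_ip}
    return None

# Normal user activity produces no alert; touching a decoy account does.
assert check_auth_event("alice", "10.0.0.5") is None
print(check_auth_event("backup-admin", "203.0.113.7"))
```

In practice the check would hook into the authentication log pipeline, so the alert fires the moment the denied login attempt is recorded.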
The SEA’s goal was to modify news content during the execute phase. At this point in the attack, defenders have to assume that an attacker is as threatening as a malicious insider. Wary defenders can attract attempts to access sensitive information by creating tantalizing internal documents that are visible but not accessible. In this case, the news agency could have prepared headlines that an adversary would find interesting, and logged any attempts to view or modify that internal data.
Another intrusion group, active since at least 2006 and identified as APT1 by investigators in 2013, used tactical D&D to prevent detection and achieve their mission.6 Over the course of months and years, their victims suffered intellectual property theft and system compromises.

During the exploit phase, APT1 attempted to social-engineer individuals with malicious emails from spoofed senders related to the target’s area of interest. APT1 crafted and attached documents to emails to exploit targets, which avoided email scanning tools by appearing legitimate. Once they were able to execute on an infected system, their malware mimicked legitimate system services to evade detection.

The attackers leveraged a blend of publicly available malware and custom tools to control infected systems using a network of proxies and compromised servers. Several tools implemented covert channels that mimicked client traffic such as that from chat and Web services, further decreasing the chance of detection. To maintain access to target networks, administrative credentials were used to establish an entrenched foothold and obtain sensitive information.
Given that APT1’s objective was to collect valuable data from a large number of targets over an extended period of time, future mitigation efforts would need to apply to a broad range of companies and government agencies.6 One mitigation strategy might be to prepare for intrusions by building false assets that appear realistic and re-engineering real assets to look like decoys.

After APT1 had compromised a target, investigators could monitor activity related to those assets and share lessons learned with partner organizations. Threat intelligence on the attacker’s behavior and tools could then be used to reinforce the efficacy of future deception operations. Defenders would leverage this intelligence and be better able to execute a plan whose purpose is to influence and manipulate where and how the adversary can operate.
Like any other capability introduced into an organization, cyber D&D must be carefully coordinated and managed to achieve the desired results. The most critical facets are the maturity level and the overall management model.

The cyber D&D maturity level manifests in the people, services, processes, and technologies specifically enabled to conduct cyber D&D operations; key indicators and metrics involve the degree to which cyber D&D capabilities are systemized and optimized. A cyber D&D capability maturity model provides a coherent blueprint that organizations can use to assess, measure, and increase the maturity of their current cyber D&D operations and develop specific cyber D&D innovations. It can also help organizations characterize not only their own cyber D&D CM, but also that of their adversaries by providing observable and often measurable indicators of specific cyber D&D capabilities. For example, organizations with a level 1 initial CM have ad hoc and chaotic processes that can be readily anticipated and countered and so fail to mislead the adversary or to attain the deception objectives; they lack the ability to correctly characterize an adversary’s response. In contrast, organizations with a level 4 or 5 CM have processes that are repeatable, customized to individual adversaries, and not obvious. They also incorporate interdomain data and anticipate the adversary’s response to deceptions.
The cyber D&D management model represents the overall approach to managing cyber D&D capabilities and operations from the perspectives of capability and operations and services. The operational processes used to conduct cyber D&D operations constitute another salient aspect of life-cycle management. In particular, cyber D&D must function in concert with the organization’s overall defensive operations and must support cyber defense. The preparations undertaken to launch and manage D&D capability must encompass coordination among people, services, processes, and technology development and deployment.

The last two facets of life-cycle management include observables and indicators as well as related metrics. These observables and indicators vary with the life-cycle of D&D operations, but should give the cyber D&D team insights into the progress and effectiveness of its activities. Organizations must establish a metrics framework to quantitatively and qualitatively measure and track the various observables and indicators, and to run analytics on them to enable higher-level inferences about adversary TTTPs.
Organizations may benefit from a spiral D&D life-cycle management process that iteratively drives toward increasing overall cyber D&D capability effectiveness through higher maturity and continuous process improvements (see Figure 1). This type of model helps an organization assess risks and effectiveness with each iteration of the spiral while promoting agile and rapid prototyping as well as tuning of D&D techniques and services based on observed outcomes.
Any attempt to incorporate cyber D&D into active cyber defense must start with establishing clear and achievable program goals. At a minimum, the planning phase should include establishing D&D program goals and developing training curricula, cyber D&D TTTPs, cyber D&D best practices and standards, and cyber D&D metrics.

In the implementation phase, the organization will start to plan based on the goals and actions from the previous phase. The plan must address both the “what” (what artifacts are needed, such as tools and code bases, threat data, shared repositories, and metrics databases) and the “how.”
Next, the organization must deploy and execute cyber D&D TTTPs, services, and supporting processes in a target environment. The target environment could be a synthetic environment such as a honeynetwork or a honeypot, the real cyber infrastructure, or some combination. In accordance with the spiral methodology, this approach to life-cycle management calls upon the organization implementing the cyber D&D program to iterate through rapid prototypes of cyber D&D with each spiral. At each iteration, the organization must evaluate the risks and effectiveness of the current prototype.

Post-deployment analysis, the last phase in the spiral, has three essential elements: outcome analysis, process improvements, and feedback. Outcome analysis centers on the overall outcome of the current spiral, addressing questions such as:
• How effective were the cyber D&D techniques developed and operationally deployed?
• What were the successes and failures?
• How well did the organization manage the total life-cycle costs within the spiral?

FIGURE 1. Overview of a spiral cyber denial and deception life-cycle management process. The four stages (plan, implement, deploy and execute, and post-deployment analysis) are conducted iteratively with an assessment of the effectiveness of each stage and a revision plan after each iteration. This life-cycle support model increases the maturity of an organization’s cyber D&D capability.
To answer these questions, the organization must analyze metrics data and field reports, using the results to formulate specific D&D improvements in processes, services, and technologies. This requires careful attention to managing change for all of the D&D elements. This phase also generates feedback to help with the next iteration of the spiral model. The organization must share appropriate metrics data with relevant parties—those refining D&D techniques or those planning the next iteration.
A system of deceptive interactions between intruders and defenders benefits from both technological development and operational coordination with cyber-intelligence analysts and security operators.
If enterprises move from a traditional cyber defense approach to one that incorporates cyber D&D, what are the grand challenges and hard technical problems that need to be solved? Is there a technology development roadmap that can guide academic and commercial developments? These questions are beyond the scope of this article, but we propose some research and technology challenges and present them within the framework of the deception chain.
To inform the purpose and collect intelligence stages, models for strategic D&D objectives can be built from both offensive and defensive perspectives. An organization that understands adversarial models can more quickly tailor defensive operations to their adversaries’ capabilities and intent. These actions may be tactical and operational as well as strategic. Game-theory models could help analyze moves and countermoves to produce promising TTTPs for cyber D&D.
During the cover story stage, it is important to create believable deception material to attract the adversary’s interest. Network and host-based deception material such as honeypots, crafted documents, and email are referred to as honeytokens. These are currently crafted manually by domain-specific operators, but could be automatically created from a template. Related areas include tear-down and repurposing of honeytokens.
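Template-driven honeytoken creation might look like the following sketch; the memo template, persona, and tracking-id scheme are invented for illustration:

```python
from string import Template

# A decoy-document template. The wording and fields are hypothetical;
# real templates would mimic the organization's actual document style.
DOC_TEMPLATE = Template(
    "MEMO - $project budget summary (CONFIDENTIAL)\n"
    "Prepared by $author\n"
    "Tracking id: $token\n"
)

def make_honeytoken_doc(project, author, token):
    """Render one decoy document. `token` is a unique per-document
    marker, so any later sighting of it can be traced to this bait."""
    return DOC_TEMPLATE.substitute(project=project, author=author, token=token)

# Stamp out one instance for a fictional project and persona.
doc = make_honeytoken_doc("Project Nightjar", "J. Doe", "HT-00042")
print(doc)
```

The same template can be rendered many times with varied personas and tokens, which is what makes automated creation (and later tear-down) cheap compared with hand-crafting each bait.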
Historical analysis of intrusion attempts will inform the plan stage. What moves has the adversary made? To what extent do these moves signal the adversary’s intentions? Which baits have worked well, or not? What is the adversary’s sphere of interest?
Preparing and executing a deception operation can be made more scalable and efficient by leveraging existing tools and training materials. A standalone “honeypot in a box” product might be developed to adapt to an organization’s network structure with a truncated setup time. Novel ways of training personnel in cyber D&D technology are also important, such as simulated intrusions and response.
Tracking honeytoken files is helpful in the monitoring stage, and can involve watermarking to alert defenders to an intruder. Correlating activity across disparate tripwires may indicate higher-level operational or strategic moves by an adversary.
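Cross-tripwire correlation can be sketched as a simple grouping step; the event tuples and threshold below are assumptions chosen for illustration:

```python
from collections import defaultdict

def correlate(events, threshold=2):
    """Group tripwire hits by source and flag sources that touched at
    least `threshold` distinct decoys: a crude proxy for deliberate,
    campaign-level activity rather than a stray scan.

    `events` is an iterable of (source, decoy_type) pairs.
    """
    decoys_by_source = defaultdict(set)
    for src, decoy in events:
        decoys_by_source[src].add(decoy)
    return [s for s, d in decoys_by_source.items() if len(d) >= threshold]

# One source touches both a tripwire account and a honeytoken document;
# the other touches a single decoy and is not flagged.
events = [
    ("203.0.113.7", "tripwire-account"),
    ("203.0.113.7", "honeytoken-doc"),
    ("198.51.100.3", "honeytoken-doc"),
]
print(correlate(events))  # → ['203.0.113.7']
```

Raising the threshold, or weighting decoy types differently, trades sensitivity for fewer false alarms.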
Technical and operational metrics are needed to continuously improve cyber D&D operations in the reinforce stage. These measure the precision and believability of honeytokens in that they attract the intended target and are readily mistaken as real. Operational metrics measure how well a particular cyber D&D technique works with respect to a specific adversary and objective. The mark of an effective cyber D&D technique is its ability to entice the desired target and exploit its interests, preventing the adversary from compromising sensitive information.
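One such technical metric can be computed directly. In this sketch, the labeling of each honeytoken touch as "adversary" or "legitimate" is assumed to come from an upstream triage step; the labels and numbers are illustrative:

```python
def honeytoken_precision(interactions):
    """Fraction of honeytoken touches attributed to suspected adversaries
    rather than legitimate users or benign scanners.

    A high value suggests the bait attracts the intended target without
    snaring normal users; a low value means the decoy is generating noise.
    """
    if not interactions:
        return 0.0
    hostile = sum(1 for label in interactions if label == "adversary")
    return hostile / len(interactions)

# Four recorded touches on one decoy document, as labeled by triage.
touches = ["adversary", "adversary", "legitimate", "adversary"]
print(honeytoken_precision(touches))  # → 0.75
```

Tracked per decoy over time, this kind of ratio helps planners retire baits that mostly catch legitimate users and reinforce the ones adversaries reliably take.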
Implementers can also measure the extent to which deception interferes with legitimate users, as well as the cost of deployment. At a macro level, organizations also need metrics for maintenance and development costs of cyber D&D operations. An efficient D&D technique is one that can be commoditized and produced in a modular and rapid manner.
Paradigms for cyber defense are evolving, from static and passive perimeter defenses to outward-looking active defenses that engage and study the adversary. This evolution opens the door at tactical, operational, and strategic levels for cyber D&D to enhance defenses in the national cyber strategy. Cyber D&D is an emerging interdisciplinary approach to cybersecurity, combining human research, experiments, and exercises into cyber D&D operations with the engineering and testing of appropriate cyber D&D TTTPs.

There is currently no national “center of gravity” for innovative research, standards development, shared repositories, or training curriculum creation for cyber D&D. Integration is lacking across the whole of government for policies and programs in cyber D&D, and for coordinating the development and use of cyber D&D defenses. Such a center of gravity would involve three action areas: standards, methodologies, and shared repositories; research and operational coordination; and active defense cyber D&D enterprise organization.
The rst area focuses on best prac-
tices and standards for cyber D&D:
catalogi ng ongoing oensive and
defensive cyber D&D techniques;
mapping ongoing threats to
appropriate D&D techniques to
support cyber intelligence on
adversaries and intrusion cam-
paign analysis;
conducting outcome analysis of
operational cyber D&D tech-
niques, impacts, and eective-
ness; and
enabling the sharing of stan-
dards and met hodologies
through repositories of tools
and practices to counter cyber
threats with defensive D&D.
The second area focuses on facilitating four types of information:

- strategic, to formulate cyber D&D policy, programs, and sponsorship for participants;
- research management, to formulate a strategic cyber D&D research roadmap with cyber researchers and operational community participation;
- research, to share technical cyber D&D research; and
- transformation, to formulate an operational roadmap that incorporates cyber D&D research results into cyber defense.
Government-sponsored research must lead the effort, and commercial technology needs to make substantial contributions.

Success in the third area requires an organization to serve as the trusted intermediary to broker cyber D&D operational sharing, collaboration, and networking to manage cyber D&D defenses in the threat landscape. This organization would also coordinate cyber D&D information exchange at highly detailed technical levels via technical exchange meetings, shared repositories, and standards. The organization would support the identification of near-term and long-term research needs and threat-based defense gaps, and help the whole of government to meet those needs. Finally, the organization would foster the development of cyber D&D training and educational curricula.

Cyber D&D should be part of the national cyber strategy. To achieve this goal, the national center of gravity program must facilitate a strategic "working group" to begin developing national cyber D&D plans, formulate US government policies, create programs, and establish goals and objectives within the strategy.

KRISTIN E. HECKMAN is a principal scientist and department head for artificial intelligence and cognitive science at the MITRE Corporation. Her research interests include denial and deception, cybersecurity, and neuropsychology. Heckman received a DSc in computer science from George Washington University. Contact her at

FRANK J. STECH is a chief social scientist at the MITRE Corporation. His research interests include evidence-based intelligence analytics, communicated influence, and counterdeception. Stech received a PhD in psychology from the University of California, Santa Barbara. Contact him at stech@

BEN S. SCHMOKER is a security researcher at the MITRE Corporation. His research interests include malware analysis, intrusion detection, and exploit mitigation. Schmoker received a BS in computer science from the University of Nebraska Omaha. Contact him at

ROSHAN K. THOMAS is a principal security researcher at the MITRE Corporation. His research interests include dissemination and access controls, trust modeling, and modeling advanced persistent threat attacks. Thomas received a PhD in information technology from George Mason University. Contact him at