SoK: Runtime Integrity
Mahmoud Ammar
Adam Caulfield (Rochester Institute of Technology)
Ivan De Oliveira Nunes (Rochester Institute of Technology)
Abstract—This paper provides a systematic exploration of
runtime integrity mechanisms, such as Control Flow Integrity
(CFI) and Control Flow Attestation (CFA). It examines their
differences and relationships while addressing crucial questions
about their goals, assumptions, features, and design spaces. This includes examining the potential coexistence of CFI and CFA on the same platform. Through a comprehensive review of
existing defenses, this paper positions CFI and CFA within
the broader landscape of runtime defenses, critically evaluat-
ing their strengths, limitations, and trade-offs. The findings
emphasize the importance of further research to bridge the
gaps between CFI and CFA, advancing the field of runtime
defenses.
Index Terms—Control Flow Integrity, Control Flow Attesta-
tion, Software Security, System Security.
1. Introduction
Unsafe programming languages like C and C++ are
still prevalent, especially for lower-level system develop-
ment [1]. Memory safety bugs, such as buffer overflows,
are prominent enablers of attacks on programs written in
such languages. Attacks that modify/inject code can be (to
some extent) mitigated by existing defenses. Among them,
Data Execution Prevention (DEP) and Write-Xor-eXecute
(W⊕X) [2] policies can prevent user-space code injection
attempts at runtime. Secure boot can locally enforce boot-
time code integrity (including the integrity of privileged
software, e.g., stage 1 and 2 boot-loaders and kernel) [3],
[4]. Static (i.e., boot-time) Remote Attestation (RA) [5]–[7]
can further convince a remote party of the integrity of the
booted code chain.
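The static RA exchange above reduces to a challenge-response MAC over a measurement of the booted code. The sketch below is a minimal Python model under simplifying assumptions (a pre-shared symmetric key, the full code image available as bytes); the function names are illustrative, not from any specific RA scheme:

```python
import hashlib
import hmac
import os

def prover_attest(attest_key: bytes, challenge: bytes, code: bytes) -> bytes:
    # Prv side: measure the booted code and bind the measurement to
    # Vrf's fresh challenge, preventing replay of old reports.
    measurement = hashlib.sha256(code).digest()
    return hmac.new(attest_key, challenge + measurement, hashlib.sha256).digest()

def verifier_check(attest_key: bytes, challenge: bytes,
                   expected_code: bytes, report: bytes) -> bool:
    # Vrf side: recompute the expected report over the known-good image.
    expected = prover_attest(attest_key, challenge, expected_code)
    return hmac.compare_digest(expected, report)

# A benign device passes; a device whose code was modified fails.
key, nonce = os.urandom(32), os.urandom(16)
good_code = b"example firmware image bytes"
report = prover_attest(key, nonce, good_code)
assert verifier_check(key, nonce, good_code, report)
assert not verifier_check(key, nonce, good_code + b"\x90", report)
```

In deployed schemes, the measurement and key handling live inside an RoT, and the MAC may be replaced by a digital signature when Vrf holds only a public key.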
On the other hand, code-reuse attacks [8] (exemplified
by Return-Oriented Programming – ROP [9] – and Jump-
Oriented Programming – JOP [10]) need not modify the
installed code, posing a significant threat even in the pres-
ence of the aforementioned defenses. They instead exploit
memory corruption vulnerabilities to trigger out-of-order ex-
ecution of sub-sequences of instructions (known as gadgets)
within a program. This can result in unintended behavior
even when code modifications are prevented.

(To appear: IEEE Security & Privacy 2025 (S&P 2025), under the title "SoK: Integrity, Attestation, and Auditing of Program Execution.")

As illustrated in Figure 1, code-reuse attacks can be broadly classified
into two categories: control flow hijacking and data-only
attacks. The former directly corrupts memory storing code
pointers, e.g., return addresses [9] and function pointers [10]
during execution. The latter corrupts control-flow-related data, e.g., loop/conditional variables or counters, without introducing any control flow transfer that is absent from the Control Flow Graph (CFG) of the target program [11], [12].
Much attention has been devoted to code-reuse attack
mitigations due to their popularity and effectiveness [13], with several protection and detection mechanisms proposed
in the past few decades [14]. Most notably, Control Flow
Integrity (CFI) mechanisms for both forward and backward
edge protection have been widely recognized as key mitigations [15], [16]. Consequently, recent years have seen efforts to adopt both academic and industry proposals, each with its own set of trade-offs [16]–[18]. Nevertheless, only a
few of these proposals, e.g., LLVM CFI [19], have become
available in production compilers, despite known limitations
in terms of granularity and compatibility [18], [20].
On the hardware side, both ARM and Intel have
equipped their latest-generation architectures with new ex-
tensions to assist control flow attack mitigations. Exam-
ples include Pointer Authentication (PA), Memory Tagging
Extension (MTE), and Branch Target Identification (BTI)
features from ARM [21], and the Control Flow Enforcement
Technology (CET) from Intel [22]. While various contem-
porary CFI approaches leverage these extensions in their
designs [23]–[30], gaps still persist [31]–[33].
In a parallel line of efforts, Control Flow Attestation
(CFA) [34]–[36] has been proposed to enable remote Ver-
ifier(s) (Vrf) to ascertain the execution integrity (including
the absence of control flow attacks/violations) of an op-
eration of interest performed by a remote device (called
a prover or Prv). In its ideal form, CFA generates an
authenticated log containing all dynamically defined con-
trol transfers occurring during the execution of an attested
operation of interest.¹ Nonetheless, similar to CFI, coarser-grained CFA approaches are also possible [37], establishing
trade-offs between the accuracy of CFA evidence and per-
formance, especially when overheads related to storage and
transmission of said evidence to Vrf are a concern (e.g.,
when Prv is a resource-constrained embedded platform).
1. Note that some CFA variations aim at enabling continuous verification
of all control flow transfers on Prv, rather than focusing on individual
operations of interest. For more details, see Section 2.3.
arXiv:2408.10200v2 [cs.CR] 20 Sep 2024
[Figure 1 content: memory corruption-based attacks divide into code injection attacks and code-reuse attacks; code-reuse attacks divide into control-flow hijacking (a.k.a. control-data) attacks, comprising Return Oriented Programming (ROP) and Jump Oriented Programming (JOP), and data-only (a.k.a. non-control data) attacks, comprising Direct Data Manipulation (DDM) and Data Oriented Programming (DOP).]
Figure 1. Classes of memory corruption-based attacks to software integrity.
Notably, not all CFA techniques guarantee that CFA
evidence is received by Vrf. While this is sufficient for
attestation, wherein a Vrf would not trust responses/values
received from Prv unless accompanied by CFA evidence, it
does not support secure runtime auditing [38]. The latter
aims to ensure that CFA evidence always reaches Vrf,
even if Prv is compromised, allowing for attack root cause
analysis and appropriate remediation.
CFI and CFA goals can be viewed as runtime analogs
of boot-time code integrity guarantees offered by secure
boot vs. static RA. While CFI enables in loco detection
of control flow violations (typically triggering exceptions
when detected), CFA concerns providing remotely verifiable
(unforgeable) evidence of the control flow path followed
by an operation of interest executed by a Prv device, thus
enabling control flow path analysis by a remote Vrf.
1.1. Motivation & Intended Contributions
Although CFI and CFA approaches exist due to the
common threat of control flow attacks, their different goals,
designs, and capabilities are not yet systematically discussed
in the literature. Naturally, the current lack of systematiza-
tion prompts questions such as:
[Q1] How do CFA and CFI goals differ?
[Q2] What are the assumptions, features, and design spaces
of CFI vs. CFA, as well as similarities and differ-
ences?
[Q3] What makes CFA different from remotely attesting ad-
herence to a CFI policy? Could CFA uncover attacks
that CFI would not (and vice-versa)?
[Q4] Could/should CFI and CFA coexist on the same plat-
form?
Additionally, there is often confusion surrounding the
terminology in the context of control flow-related mecha-
nisms (e.g., prevention vs. local detection vs. remote detec-
tion; runtime attestation vs. runtime auditing; fine-grained
vs. coarse-grained approaches; etc.) and their relationship
to memory safety and compartmentalization defenses. This
ambiguity makes it challenging to precisely understand the
guarantees provided by each approach. Therefore, it be-
comes crucial to delve into such nuances to clearly grasp
the benefits of each approach and their positions within the
broader landscape of runtime software defenses.
In this paper, we explore the relationships and differ-
ences between CFI and CFA by systematically examining
the fundamental goals and trade-offs associated with both
approaches. Towards this goal, we present a systematic
review of existing runtime defenses to provide context
and position CFI and CFA within the broader landscape
of execution integrity defenses. Subsequently, we classify
recent work in CFI and CFA according to design choices,
weighing their advantages and disadvantages and aiming to
grasp a better understanding of existing limitations. Finally,
we discuss missing links between CFI and CFA and future
research avenues.
1.2. Literature Selection Criteria
The selection criteria for the inclusion of academic
papers or industrial proposals in our systematization are as
follows:
● We aim to include all available literature on CFA, given the manageable number of existing proposals (barring unintended oversights).
● Given the extensive volume of CFI proposals, we use the following criteria for selection within the past 10 years:
  – Papers published in prestigious security-focused conferences such as USENIX Security, IEEE S&P, CCS, and NDSS.
  – Papers with more than 100 citations, indicating their broad influence on subsequent work.
  – Papers or proposals adopted in mainstream compilers or hardware architectures.
1.3. Scope & Related Systematizations
Memory safety [39]–[42] approaches aim to eliminate
or reduce vulnerabilities that could lead to control/data flow
attacks and data corruption during software development,
i.e., before deployment. These typically work in two ways.
First, memory safety can be a built-in security feature of
programming languages such as Go and Rust. Rust [43],
for instance, utilizes static compile-time analysis to optimize
safety checks and memory management decisions, such as
bounds check elimination while incorporating mechanisms
(e.g., value ownership and borrowing) to ensure temporal
safety. Second, memory safety can be obtained through memory-safe dialects of existing unsafe programming languages. An
example of this is Checked C [44], which augments C with
spatial memory safety checks introduced at compilation time
and runtime. This involves refining the C type system with
safe pointer and array types with stricter usage models. Even
so, this approach provides partial protection and presents
compatibility challenges with legacy software.
A third class involves fortifying and isolating code with
runtime checks [45], including compartmentalization [46],
software fault isolation [47], and memory layout random-
ization [48]. While the focus here is on run-time violation
detection and isolation, some texts still refer to this third
class as memory safety techniques. In general, there is no consensus on whether the term should be restricted to the first two cases or extended to include this third class (and more), leading to general confusion about what the term "memory safety" should encompass [39].
Terminology aside, our work focuses on systematizing
and discussing the relationship between runtime integrity
enforcement [15] and runtime attestation [34] methods used
after software deployment (hence “runtime”). This is com-
plemented by existing systematizations focused, for in-
stance, on memory safety or compartmentalization. Related
to our work, Szekeres et al. [14] provide a general model
of memory corruption attacks, which serves as a foundation
for identifying the different policies that can prevent such
attacks. Song et al. [49] offer a systematic overview of
sanitizers, emphasizing their role in uncovering security
vulnerabilities. Larsen et al. [50] present a comprehensive
and unified overview of software diversification approaches,
highlighting their inherent trade-offs. Burow et al. [51]
conduct a thorough evaluation of the design space of shadow
stacks, considering performance, compatibility, and security
aspects. Contrary to the aforementioned efforts, this SoK
focuses on shedding light and evaluating recent CFI and
CFA methods as well as their relationship and differences.
2. A Lightning Tour
This section reviews code-reuse attacks and existing defenses, highlighting the role of CFI and CFA in this landscape.
2.1. Code Reuse Attack Background
Figure 1 illustrates the general classes of memory
corruption-based attacks. Code-reuse attacks are further
classified into control-flow hijacking and data-only attacks.
At a high level, their difference lies in the former performing
control flow transfers that do not exist in the legitimate
CFG of the target program and the latter causing unintended
transfers via edges that exist in the CFG. The two cases
are depicted in Figure 2. Return Oriented Programming
(ROP) [9] and Jump Oriented Programming (JOP) [10] are
the two main categories of control-flow hijacking attacks.
Both ROP and JOP stitch out-of-order sub-sequences of
instructions, so-called gadgets, to modify the control flow
path of the target program and perform a malicious behavior.
As their names indicate, ROP corrupts backward edges,
targeting gadgets that end with return instructions. JOP
corrupts forward edges, targeting gadgets that end with
indirect jump or call instructions.
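The gadget-stitching idea can be pictured with a toy model: treat the corrupted stack as a list of return addresses, each dispatching one gadget. Everything below (addresses, gadget effects) is invented for illustration; real gadgets are raw instruction sequences ending in returns or indirect jumps, not Python functions:

```python
# Toy model of ROP-style dispatch: "gadgets" are functions keyed by their
# (hypothetical) addresses, and the corrupted stack is a list of return
# addresses consumed by a return-driven loop.
GADGETS = {
    0x401000: lambda effects: effects.append("pop"),
    0x401010: lambda effects: effects.append("write"),
    0x401020: lambda effects: effects.append("syscall"),
}

def rop_execute(corrupted_stack):
    # Each "return" transfers control to the next attacker-chosen gadget.
    effects = []
    for ret_addr in corrupted_stack:
        GADGETS[ret_addr](effects)
    return effects

# The attacker composes behavior the program never contained as a whole.
assert rop_execute([0x401000, 0x401010, 0x401020]) == ["pop", "write", "syscall"]
```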
Data-only attacks can be classified into two categories
based on the type of non-control data manipulated [52]:
Direct Data Manipulation (DDM) and Data-Oriented Pro-
gramming (DOP). DDM attacks can be as simple as illegally
modifying the value of a variable [11].

[Figure 2 contrasts control-flow hijacking (transfers that do not exist as CFG edges) with data-only attacks (illegal transfers over existing CFG edges).]
Figure 2. Control-flow hijacking vs. Data-only attacks on a CFG.

DOP attacks [12], on the other hand, aim to perform expressive (often Turing-complete) computation by chaining carefully selected DOP gadgets so that the gadget chain forms a valid path in the CFG. This is typically accomplished by corrupting control-flow-related data, such as variables that define paths in conditional statements and loop counters.
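A data-only attack can be illustrated without any machine-level detail: corrupting a loop bound changes the program's outcome while every transfer still follows an existing CFG edge. The login example below is purely hypothetical, with the memory corruption modeled as an overwritten parameter:

```python
# Toy illustration (not a real exploit): the "memory corruption" is modeled
# as overwriting the attempts bound before the authentication loop runs.
def login(pin_attempts_allowed: int, guesses, secret_pin: int) -> bool:
    for guess in guesses[:pin_attempts_allowed]:  # edge exists in the CFG
        if guess == secret_pin:                   # edge exists in the CFG
            return True
    return False

# Legitimate configuration: 3 attempts, so brute force fails.
assert login(3, list(range(10000)), 4242) is False
# Data-only "attack": the bound is corrupted to 10000. Same code, same CFG
# edges taken, yet brute force now succeeds.
assert login(10000, list(range(10000)), 4242) is True
```

Because no illegal edge is ever taken, a CFI monitor observing only control transfers would see nothing anomalous here.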
2.2. Runtime Defenses
Figure 3 depicts the relationship between memory vul-
nerabilities, runtime exploits, and associated defences. The
primary categories of runtime defenses against memory
corruption-based attacks are illustrated in steps (②–⑤), which have been adapted from [14] and [45]. Software
testing tools, such as sanitizers [49] and fuzzers [53], act as
a front-line defense in the pre-deployment phase, where the
main goal is to find all possible vulnerabilities and fix them.
Boot- and Load-time software verification mechanisms, such
as Secure Boot [54], Measured boot [55], and Load-time
attestation (e.g., the Linux Integrity Measurement Architec-
ture (IMA) for user-space software [56]), are deployed as
a primary shield in the post-deployment phase to prevent
booting/loading of non-authentic software. However, the
presence of memory corruption vulnerabilities at runtime
remains a concern. Therefore, several runtime defenses have
been proposed, each targeting specific steps of the attack
process. Considering the five distinct attack steps (①–⑤)
outlined in the general model of memory corruption attacks
from [14], Figure 3 illustrates which class of defenses can
counter each type of exploit and at which step. Note that,
as information leaks are not integrity violations, they are
not considered in the taxonomy shown in Figure 1. In the
following, we summarize the individual attack steps and
relevant defenses:
① Memory Vulnerability: Finding and exploiting a
memory corruption vulnerability is an essential requirement
for any of the attacks considered in Figure 3. Illegal access to
a memory address, whether to read, write, or both, depends
on the particular vulnerability. We note that vulnerabilities
that enable read-only access are (by themselves) not suffi-
cient to corrupt execution integrity of the target program.
[Figure 3 content: a flowchart mapping attack steps ⓪–⑤ (software sanitization/verification, memory vulnerability, integrity violation, exploit payload, exploit dispatch, exploit execution) to defense classes such as memory safety, software compartmentalization, code/pointer/data integrity, software diversification (address-space, instruction-set, and data-space randomization), control/data flow integrity, W⊕X, and binary/control flow/data flow attestation, with attestation mechanisms building on Roots of Trust (RoTs).]
Figure 3. A high-level overview of defenses against memory corruption-based attacks with a focus on runtime defenses (expanded based on [14] and [45]).

② Integrity Violation: Exploiting vulnerabilities that grant illegal write access enables adversaries to tamper with the various aspects of a program, including the (i)
program’s code (instructions in memory), (ii) control data
(e.g., return addresses and function pointers), and (iii)
non-control data (e.g., data variables and pointers). Isolation
and compartmentalization mechanisms play a crucial role
in enforcing access control permissions to mitigate integrity
violations. These mechanisms restrict the targets that ad-
versaries can access, often thwarting attacks at an early
stage or preventing their spread to the rest of the system.
For instance, access control mechanisms like the AArch64
(Un)Privileged Execution Never feature [57] make it signif-
icantly challenging to directly corrupt program code [58].
Code Pointer Integrity (CPI) [59] is a security mechanism
that safeguards all code pointers and data pointers pointing
to code by storing them in an isolated memory area. Code
Pointer Separation (CPS) [59], a variant of CPI, isolates only
code pointers while leaving the protection of data pointers
to other measures for performance reasons. Software Fault
Isolation (SFI) [47], memory tagging [60], and capability-
based architectures (exemplified by CHERI [46]) operate at
various granularity levels to isolate larger software compo-
nents into distinct protection domains. These mechanisms
limit the consequences of attacks that exploit memory vul-
nerabilities by confining them within specific compartments.
③ Exploit Payload: If the previous defenses are bypassed, the adversary can inject payloads to manipulate the data and
control flows of the target program. In general, the pay-
load injection process requires knowledge of the program’s
memory layout. In response, software diversification aims
to impede the crafting of reusable exploits by introducing
uncertainty through randomization. For instance, Address
Space Layout Randomization (ASLR) [48] and Instruction
Set Randomization (ISR) [61] are lightweight defenses that
randomize memory layouts, making payload injection more
challenging for control-flow hijacking and code-injection at-
tacks. Additionally, Data Space Randomization (DSR) [62]
can complicate data-only attacks. While these techniques
offer probabilistic guarantees, they significantly raise the
difficulty of runtime attacks.
④ Exploit Dispatch: To successfully launch sophisticated attacks, the adversary needs to divert the target program
to operate on the injected payload. This step is crucial for
expressive code-reuse attacks such as ROP, where the attack
is initiated by manipulating the stack pointer to execute
a sequence of selected gadgets in a predetermined order,
with each gadget returning to a specific memory address
in the following gadget to implement the desired attack
behavior. Control Flow Integrity (CFI) [15] and Data Flow
Integrity (DFI) [63] are two commonly used defenses to
ideally detect and block control-flow hijacking and data-
only attacks at this stage. These techniques involve imple-
menting and enforcing policies that must be followed during
program execution. However, contemporary literature shows
that maintaining gap-free policies is inherently challenging,
leaving potential exploit opportunities [20], [64], [65].
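A DFI policy can be pictured as a reaching-definitions check on each memory read: a load is legal only if the value was produced by a statically allowed store site. The sketch below is a toy Python model with hypothetical site and variable names; real DFI (e.g., [63]) instruments loads and stores at the binary level:

```python
# Toy DFI check: each load site has the set of store sites that may legally
# have produced the value it reads ("reaching definitions"); any other
# last writer indicates data corruption.
ALLOWED_DEFS = {"use_balance": {"init_balance", "apply_deposit"}}

last_writer = {}   # models DFI's runtime definitions table

def store(var: str, site: str, memory: dict, value) -> None:
    memory[var] = value
    last_writer[var] = site   # record which store site wrote last

def load_checked(var: str, use_site: str, memory: dict):
    if last_writer.get(var) not in ALLOWED_DEFS[use_site]:
        raise RuntimeError("DFI violation")
    return memory[var]

mem = {}
store("balance", "init_balance", mem, 100)
assert load_checked("balance", "use_balance", mem) == 100
store("balance", "overflow_gadget", mem, 10**9)   # illegal definition site
try:
    load_checked("balance", "use_balance", mem)
    assert False
except RuntimeError:
    pass
```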
⑤ Exploit Execution: As discussed above, ensuring
the complete integrity of a victim program can be chal-
lenging. As a result, runtime attestation mechanisms have
been proposed as a last line of defense to enable remote
verification of code and execution integrity in a trustworthy
manner. These mechanisms aim to detect tampering with
code or violations of control/data flow. They also provide means to convince a remote party of the execution integrity of the target program during an operation of interest and, in some cases, enable auditing of root-cause vulnerabilities when exploits occur. In addition to measures such as
W⊕X[66] and DEP [2] policies, which are deployed to
prevent code-injection attacks, remote attestation approaches
of code binary [7], [67]–[69] are widely regarded as essential
for providing remotely verifiable evidence of binary integrity
at runtime. At any time during execution, they can be used to
attest that the code (including instrumentation in it, e.g., as
required by several CFI/CFA defenses) remains untampered.
RA becomes paramount for most-privileged code (and single-privilege systems, e.g., bare-metal micro-controllers) where
full disablement of runtime code modifications implies the
inability to perform remote software updates [70]. Control
Flow Attestation (CFA) [34], [36] and Data Flow Attestation
(DFA) [71]–[73] approaches have emerged to specifically
detect and audit code-reuse attacks, enabling trustworthy
remote verification of control-flow and data-flow integrity.
As shown in Figure 3, attestation mechanisms build atop
Roots of Trust (RoTs) as a foundation to provide trustworthy
evidence of system/software state that can be remotely ver-
ified. For instance, RoTs are utilized in several key aspects
of attestation mechanisms. They serve as a foundation for
securely measuring system state and/or installed software
(RoT for Measurement), securely storing attestation secret
keys (RoT for Storage), and signing attestation reports (RoT
for Reporting). Examples include Trusted Platform Mod-
ules (TPMs) [74], DICE [75], hardware extensions in Intel
SGX-capable processors [76], and ARM TrustZone-based
RoTs [77], [78], and various academic proposals, such as
Keystone [79] and Byotee [80], among others.
2.3. CFI & CFA: Definitions & Threat models
Considering their prominent status as actively researched
defenses, the rest of the paper systematically explores CFI
and CFA techniques, shedding light on their underlying
principles, relationships, trade-offs, and other crucial aspects, to clarify considerations for their adoption in real-world scenarios.
Control Flow Integrity (CFI)
Originally proposed by Abadi et al. [15], CFI is a policy-
based mitigation against control-flow hijacking restricting
the execution path of a program at runtime based on a
pre-computed CFG. In principle, enforcing CFI on a target
program involves:
●Generation of an over-approximated CFG, denoted ≈CFG.
●Instrumentation of the target’s indirect branch instructions
with runtime checks, so-called Inline Reference Monitors
(IRMs), which enforce the control flow to be compliant with
≈CFG.
≈CFG can be generated statically (as proposed originally [15]) or dynamically, as in subsequent proposals [81], [82]. When ≈CFG ≡ CFG, it is generally difficult
for an adversary to manipulate the control flow to alter
the intended semantics of the target program without de-
tection by the instrumentation checks. However, statically
determining strict CFGs for complex programs remains an
open challenge [83], leading many practical approaches to
over-approximate ≈CFG.
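Conceptually, an IRM reduces each protected transfer to a membership test against the ≈CFG-derived target set. The following Python model is only a sketch (the addresses and sets are made up); production CFI realizes this check with inlined label comparisons or jump tables rather than dictionary lookups:

```python
# ≈CFG modeled as a map from each indirect branch site to its allowed
# targets (all addresses are hypothetical).
APPROX_CFG = {
    0x1000: {0x2000, 0x3000},   # indirect call site -> valid function entries
    0x2040: {0x1004},           # return site -> valid return address
}

class CFIViolation(Exception):
    pass

def irm_check(branch_site: int, target: int) -> int:
    # Inline Reference Monitor: abort before transferring to an invalid target.
    if target not in APPROX_CFG.get(branch_site, set()):
        raise CFIViolation(f"site {branch_site:#x} -> {target:#x}")
    return target

assert irm_check(0x1000, 0x2000) == 0x2000   # edge exists in ≈CFG
try:
    irm_check(0x1000, 0x4a4a)                # hijacked pointer: invalid target
    assert False
except CFIViolation:
    pass
```

Note that the over-approximation is visible here: any target within a site's set passes, even if the precise execution context would make it illegitimate.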
Control Flow Attestation (CFA)
CFA focuses on producing unforgeable evidence of the
control flow path followed by an executable on a prover
device (Prv). This evidence allows a remote verifier (Vrf)
to assess the trustworthiness of execution and its outcomes.
CFA is an (on-demand) challenge-response protocol, as
shown in Figure 4.
A CFA instance starts with Vrf sending a request con-
taining a cryptographic challenge to Prv. Upon receiving the
request, Prv must execute the operation requested by Vrf
(either specified implicitly or explicitly within the request).
During the execution of the requested task, an RoT in Prv
must ensure that an authenticated log (CFLog) containing a
representation of the control flow path executed during the
operation is built. After execution completes, the RoT com-
putes an authenticated integrity measurement (e.g., using a
MAC or signature) over the received challenge, CFLog, and
the application’s binary to produce a response token (REP).
Figure 4. Typical CFA Interaction
Finally, Prv transmits REP to Vrf along with CFLog. Given the
need to securely store the secret used to authenticate REP
even when Prv is potentially compromised, RoT imple-
mentations typically involve some form of secure hardware
support.
Upon receiving REP, Vrf can use this evidence to determine whether Prv executed the expected operation correctly through a valid
control flow path. Further, when REP shows an invalid path,
Vrf can analyze the anomalous evidence to determine its
cause and potentially remediate it.
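On the Vrf side, fine-grained evidence can be checked by walking CFLog and confirming that each transfer is a legal edge of the expected CFG. A minimal sketch with hypothetical addresses (real verifiers also check REP's authenticity and may perform richer path analysis):

```python
# Vrf-side check: walk the received CFLog and confirm every transfer is an
# edge of the expected CFG (addresses are hypothetical).
EXPECTED_CFG_EDGES = {(0x100, 0x200), (0x204, 0x104), (0x108, 0x300)}

def verify_cflog(cflog):
    # Returns the first offending transfer, or None if the whole path is valid.
    for edge in cflog:
        if edge not in EXPECTED_CFG_EDGES:
            return edge
    return None

assert verify_cflog([(0x100, 0x200), (0x204, 0x104)]) is None
assert verify_cflog([(0x100, 0x200), (0x204, 0x999)]) == (0x204, 0x999)
```

When a violation is found, the offending edge and the path preceding it give Vrf the evidence needed for root-cause analysis.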
Existing CFA techniques (see Section 3) use either (1)
binary instrumentation along with Trusted Execution Envi-
ronment (TEE) support; or (2) custom hardware modifica-
tions to generate CFLog by detecting and saving each branch
destination to a dedicated and protected memory region. For
techniques that use binary instrumentation, a pre-processing
phase modifies the binary so that branch instructions are
prepended with additional calls to a TEE-protected trusted
code. Once called, the trusted code appends the current branch destination to CFLog. In hardware-based techniques,
custom hardware interfaces with the CPU to detect branches
and record their destinations to protected memory.
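The log-then-seal behavior common to both instrumentation- and hardware-based designs can be modeled as follows; `TinyCfaRoT` and its interfaces are invented for illustration, and a real RoT would protect the key and CFLog in hardware or a TEE:

```python
import hashlib
import hmac

class TinyCfaRoT:
    # Toy RoT model: instrumentation stubs call log_branch(); seal() emits REP.
    def __init__(self, attest_key: bytes, binary: bytes):
        self._key = attest_key
        self._binary_digest = hashlib.sha256(binary).digest()
        self.cflog = []

    def log_branch(self, src: int, dst: int) -> None:
        # Called on every dynamic control flow transfer of the attested task.
        self.cflog.append((src, dst))

    def seal(self, challenge: bytes) -> bytes:
        # REP binds Vrf's challenge, the recorded path, and the binary digest.
        log_bytes = b"".join(s.to_bytes(8, "little") + d.to_bytes(8, "little")
                             for s, d in self.cflog)
        return hmac.new(self._key,
                        challenge + log_bytes + self._binary_digest,
                        hashlib.sha256).digest()

rot = TinyCfaRoT(b"k" * 32, b"firmware-image")
rot.log_branch(0x100, 0x200)
rot.log_branch(0x204, 0x104)
rep = rot.seal(b"nonce")
assert len(rep) == 32
```

Vrf, holding the key (or a corresponding public key in signature-based designs), recomputes the MAC over the transmitted CFLog and the known-good binary digest to authenticate the evidence.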
CFI/CFA Coverage
Granularity: The granularity of CFI/CFA mechanisms
refers to the detail in which a particular control flow trans-
fer is monitored/checked. In this work, we categorize the
granularity of a specific technique as either coarse-grained
or fine-grained. Since CFI and CFA have different security
goals (local detection/prevention vs. providing remote evi-
dence enabling remote detection), their granularity pertains
to different aspects.
A coarse-grained approach refers to broadly applied
checks that are independent of specific control flow transfers
within the code. In the case of CFI, this involves techniques
applied based on instruction type and agnostic to individual
transfers. For instance, the following CFI policies can be
classified as coarse-grained: enforcing landing pads for
calls/returns, checking function type/parameter for indirect
calls, and restricting indirect control flow targets within the
bounds of a specific sandbox/address space. Since these
policies are generally applied to all control flow transfers
within a specific scope and do not account for the specific
details of each transfer, they are considered coarse-grained.
In CFA, a scheme is deemed coarse-grained if it does
not record all control flow transfers within the attested
application onto CFLog .
A fine-grained technique refers to mechanisms that apply a specific check or action for each control flow instruction. In the case of CFI, this entails schemes that verify each
indirect target against a unique set of valid locations, rather
than applying a broader rule based on the instruction type.
For instance, enforcement through techniques like shadow
stacks, jump-tables, or definition sets determined by data-
flow analysis are considered fine-grained solutions. A CFA
scheme is classified as fine-grained if it records all control
flow transfers within the attested application.
Sensitivity: Although closely related to granularity, the
sensitivity of a certain technique describes a different char-
acteristic. It refers to the extent to which execution context
is considered for determining the set of valid targets. In
this work, we categorize schemes as insensitive, context-sensitive, and path-sensitive.
Techniques have insensitive enforcement if they do not
consider the calling context or current execution path when
defining the set of valid targets for a particular control flow
transfer. As such, most coarse-grained CFI schemes are also insensitive because they are applied based on the instruction type or
sandbox/address range, ignoring the current execution path
or the calling context.
Context-sensitive approaches consider the calling context
to determine the set of valid targets. Examples include
target bounds being within a particular function/sandbox.
Additionally, when a function is called at multiple locations
within a second function, a context-sensitive approach might
determine returns to any call site within the second function
as valid. For forward edges, a context-sensitive approach
allows any valid definitions that can reach the function
containing the forward edge.
Path-sensitive approaches determine the set of valid tar-
gets by considering both the calling context and the current
executing path. For instance, shadow stacks are regarded as
path-sensitive enforcement mechanisms for return addresses
because they limit a return to a single call site, rather than a
set of call sites irrespective of the current path. Furthermore,
schemes that employ data-flow analysis techniques, such
as reaching definitions or points-to analysis, to determine
the valid destinations of indirect calls are considered path-
sensitive.
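A shadow stack illustrates path sensitivity: the valid target of a return is the single call site of the current execution path, not a static set. A minimal model (the addresses are hypothetical, and a real shadow stack lives in protected memory):

```python
class ReturnViolation(Exception):
    pass

shadow_stack = []   # models a hardware- or software-protected shadow stack

def on_call(return_address: int) -> None:
    # Mirror the return address on the protected shadow stack at call time.
    shadow_stack.append(return_address)

def on_return(return_address: int) -> int:
    # A return is valid only toward the single most recent call site.
    expected = shadow_stack.pop()
    if return_address != expected:
        raise ReturnViolation(f"{return_address:#x} != {expected:#x}")
    return return_address

on_call(0x1004)
on_call(0x2040)
assert on_return(0x2040) == 0x2040   # matches the current path
try:
    on_return(0x9999)                # ROP-style corrupted return address
    assert False
except ReturnViolation:
    pass
```

By contrast, a context-sensitive policy would accept any call site in the enclosing function, and an insensitive one any return-target whatsoever within the sandbox.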
In CFI, sensitivity affects the local decision on whether
a transfer constitutes a violation. In CFA, sensitivity reflects
the type of analysis/detection that can be performed by Vrf
based on the received evidence.
CFI/CFA Threat Models & Assumptions
The security of most CFI techniques depends on the
presence of added instrumentation used to enforce CFI
checks. In many cases, this is attained via W⊕X permissions
for memory accesses, as shown in Figure 3. While sensible
for user-space code, privileged code can typically disable
W⊕X enforcement. Therefore, most CFI approaches that
target privileged code (e.g., Kernel) rule out code injection
from their threat model.
CFA mechanisms require an RoT to implement their
attestation functionality, including acquisition and signing
of relevant evidence. The RoT function can also attest
the executed binary (and any instrumentation therein) as
performed by regular RA. This removes the need for W⊕X
enforcement, as long as code is attested in a temporally
consistent manner, i.e., code remains the same in the interim
between its measurement and execution. This also makes
CFA useful to verify privileged code and code that runs on
single-privilege micro-controllers.
Similar to other TEE-based security services, TEE-based
CFA (e.g., [34], [71], [84]) assumes that any applications
outside the (hardware-protected) trusted realm of the TEE
(e.g., outside the Secure World in TrustZone) can be mod-
ified/compromised whereas the RoT implementation within
the Secure World is trusted. This is typically supported by
a secure boot of the trusted code and implicitly assumes a
minimal and vulnerability-free RoT implementation. Some
CFA methods, based entirely on custom hardware [36], [72],
[85], eliminate the need to trust a software TCB within the
TEE by implementing the CFA RoT entirely in hardware.
Generally, both CFI and CFA consider underlying hard-
ware to be trusted, focusing on software-based exploits.
3. Design Space
Figure 5 illustrates distinguishing factors in CFI and
CFA, highlighting the consequences of design choices to
their effectiveness and susceptibility to attack vectors. Ta-
ble 1 presents a classification of recent work in CFI and
CFA, capturing design principles of each mechanism and
assessing their trade-offs. Aside from aspects related to Se-
curity Goals (defined in Section 2.3), the rest of this section
elaborates on design factors. Next, Section 4 discusses the
consequences of these design choices.
3.1. Different Objectives
CFI mechanisms primarily focus on locally detecting
control-flow violations during the dispatching stage to
prevent execution of exploited code from continuing, as
depicted in step 4 of Figure 3. Here, the wording “prevention”
is not to be confused with development stage memory
safety defenses (see Section 1.3) that attempt to eliminate
vulnerabilities altogether. In other words, CFI does not
remove root-cause vulnerabilities. Instead, it impedes certain
attack stages, increasing adversaries’ difficulty in achieving
arbitrary code execution.
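To make the dispatch-stage check concrete, consider the following minimal Python sketch of an inline reference monitor guarding an indirect call against a statically computed target set. The site/handler names and the `CFIViolation` type are hypothetical, for illustration only; real CFI instrumentation operates on machine addresses, not Python callables.

```python
class CFIViolation(Exception):
    """Raised locally when an indirect transfer leaves the allowed set."""

# Hypothetical, statically computed policy: call site -> allowed targets.
ALLOWED_TARGETS = {
    "dispatch_site_1": {"handler_a", "handler_b"},
}

def checked_indirect_call(site, target, functions):
    # Inline reference monitor: validate before dispatching (stage 4).
    if target not in ALLOWED_TARGETS.get(site, set()):
        raise CFIViolation(f"{site} -> {target}")
    return functions[target]()

functions = {"handler_a": lambda: "a", "handler_b": lambda: "b",
             "attacker_gadget": lambda: "pwn"}

assert checked_indirect_call("dispatch_site_1", "handler_a", functions) == "a"
try:
    checked_indirect_call("dispatch_site_1", "attacker_gadget", functions)
except CFIViolation:
    pass  # execution of the exploited path is stopped locally
```

Note that the vulnerability that corrupted `target` is untouched; only the dispatch of the illegal transfer is blocked.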
Conversely, CFA is concerned with providing execution
evidence that can be verified and inspected remotely. In
this case, attack detection occurs at a relatively late stage,
but provides essential insights on attack behavior that can
be used to respond to attacks that have evaded prevention
measures. This also includes logical bugs (i.e., those not
caused by a memory vulnerability) in a program’s control
flow that CFI would not treat as an exception. In contrast,
CFI does not aim to inform or convince a remote party of
execution integrity, handling exceptions and faults locally.
When incorporated atop CFA, runtime auditing [38]
aims to reliably deliver evidence to Vrf, even when a
compromised Prv attempts not to follow the CFA protocol
(see Section 5), refusing to send reports to Vrf in an attempt
to hide the exploit behavior.
3.2. Action Mechanisms
Action mechanisms fall into (i) enforcement, (ii) monitoring, or (iii) hybrid (i.e., a combination thereof) techniques, and can be software-based or hardware-assisted.

Figure 5. Design Factors of CFI/CFA and Related Consequences. (The figure organizes design factors into Objectives (local detection & prevention, remote detection, auditing), Mechanisms (enforcement, monitoring, hybrid), and Execution Environment (hardware-agnostic, extension-specific, RoT-based), along with their consequences for Effectiveness (coverage, compatibility, feasibility, performance, scalability) and Attack Vectors (pitfalls, CF bending, race conditions, side-channels).)

Early CFI designs relied heavily on enforcement using software-based instrumentation to introduce IRMs via generic instructions [19], [82], [86]–[95]. More recent
proposals leverage hardware extensions for specialized CFI
instructions as IRMs [21]–[23], [26]–[28], [96]–[98].
A significant limitation in the above-mentioned ap-
proaches is the lack of context sensitivity, with transfers
checked individually, making these CFI techniques bypass-
able, as demonstrated in several attacks [31], [99]–[103].
This has fueled the development of context-sensitive CFI
[81], [104]–[110]. Some proposals in this area use advanced
points-to-analysis to incorporate path/flow sensitivity to en-
force policies effectively. They also leverage commodity
hardware features to safeguard the integrity of critical vari-
ables that represent the main reference of execution history
in such policies [107], [108].
Hybrid CFI approaches use hardware features to locally
save sequences of control flow transfers for delayed check-
ing as a batch. For instance, PathArmor [104] leverages
the Intel Last Branch Record (LBR) registers to enable implicit
monitoring of execution paths, whereas PittyPAT [105] and
µCFI [81] mainly depend on the Intel Processor Tracing
(PT) technology [111] to explicitly monitor and protect
the execution integrity at runtime. SHERLOC [112] uses
ARM Macro Trace Buffer (MTB) and TrustZone for delayed
verification of CFI checks as a batch.
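A simplified model of this batched style of checking is sketched below, assuming a hypothetical LBR-like ring buffer of recorded branches that is validated in bulk at a security-sensitive event (e.g., kernel entry). The toy CFG and node names are illustrative only.

```python
from collections import deque

CFG_EDGES = {("A", "B"), ("B", "C"), ("C", "A")}  # hypothetical static CFG

class BranchRecorder:
    """Models an LBR-like ring buffer of the last N taken branches."""
    def __init__(self, depth=16):
        self.buf = deque(maxlen=depth)

    def record(self, src, dst):
        self.buf.append((src, dst))      # no check here: recording is cheap

    def verify_batch(self):
        # Delayed check of the whole batch, e.g., at a syscall boundary.
        return all(edge in CFG_EDGES for edge in self.buf)

lbr = BranchRecorder()
for edge in [("A", "B"), ("B", "C")]:
    lbr.record(*edge)
assert lbr.verify_batch()
lbr.record("C", "Z")                     # hijacked transfer
assert not lbr.verify_batch()
```

The appeal of the design is that per-branch cost is reduced to a record operation, at the price of detection being delayed until the batch is inspected.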
CFA monitors the execution flow, recording transfers to
be reported in some form to a Vrf. C-FLAT [34] was the
earliest CFA and used software instrumentation to insert
IRMs to redirect each control flow transfer to a secure
software routine housed within TrustZone. This routine ex-
tends branch destinations into a hash-chain before resuming
the attested execution (and performing the branch). Tiny-
CFA [35] shows an instrumentation approach to achieve
CFA atop the lightweight Proof of Execution (PoX) archi-
tecture APEX [113] reducing hardware costs. Additionally,
the work of Papamartzivanos et al. [114] utilizes Intel PT
technology [111] for generating the runtime traces. LO-
FAT [36] and ATRIUM [85] eliminate instrumentation re-
quirements from C-FLAT by implementing custom hard-
ware modules to detect control flow transfers and extend the
hash-chain. While these early approaches produce evidence
that minimizes storage/transmission costs (to the size of one
hash digest), they result in loss of information, requiring
Vrf to use the received hash digest to derive the exact
control flow path for inspection. The complexity of this
task grows exponentially, leading to the well-known path
explosion problem [115], [116].
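The hash-chain style of evidence can be sketched as follows. This is an illustrative Python model, not C-FLAT's actual implementation; the branch-destination addresses and the set of legal paths are hypothetical.

```python
import hashlib

def extend(chain, dest):
    """Extend the running hash-chain with one branch destination."""
    return hashlib.sha256(chain + dest.to_bytes(4, "little")).digest()

def attest(path):
    chain = b"\x00" * 32                 # initial measurement
    for dest in path:
        chain = extend(chain, dest)
    return chain

benign = attest([0x100, 0x204, 0x3F8])

# To interpret the single received digest, Vrf must pre-compute digests
# of all legal paths -- the source of the path-explosion problem.
known_good = {attest(p) for p in ([0x100, 0x204, 0x3F8], [0x100, 0x3F8])}
assert benign in known_good
assert attest([0x100, 0xDEAD]) not in known_good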
To ease verification and inspection of CFA evidence,
more recent techniques [37], [38], [71], [72], [84], [117]
generate CFLog as a lossless trace containing all relevant
control flow evidence. For instance, OAT [71], ARI [37],
and ISC-FLAT [84] leverage TEEs to securely update and
store the runtime evidence. LiteHAX [72] and ACFA [38]
utilize custom hardware for recording a verbatim trace.
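In contrast, a lossless CFLog lets Vrf replay the trace edge by edge against the program's CFG and pinpoint the first offending transfer, as in this illustrative sketch (the CFG and addresses are hypothetical):

```python
CFG = {0x100: {0x204, 0x3F8}, 0x204: {0x3F8}}  # hypothetical legal edges

def verify_cflog(entry, cflog):
    """Vrf-side inspection: walk the verbatim trace edge by edge."""
    cur = entry
    for dst in cflog:
        if dst not in CFG.get(cur, set()):
            return cur, dst   # first offending edge, for root-cause analysis
        cur = dst
    return None               # trace is consistent with the CFG

assert verify_cflog(0x100, [0x204, 0x3F8]) is None
assert verify_cflog(0x100, [0xDEAD]) == (0x100, 0xDEAD)
```

Compared to the hash-chain above, this trades storage/transmission cost for evidence that directly supports inspection and auditing.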
3.3. Execution Environment
Requirements in terms of the execution environments
for CFI/CFA can be distinguished as: hardware-agnostic,
extension-specific, and RoT-based.
CFI focuses mainly on the first two types. Hardware-
agnostic CFI mechanisms, e.g., LLVM-CFI [19] and Mi-
crosoft Control Flow Guard (MS-CFG) [118] can cover a
variety of targets. However, portability can come at the price
of performance and security guarantees. Hence, several CFI
approaches use specific architectural (hardware) extensions
in their local environments. Some involve custom hardware
extensions specifically designed to support CFI, while others
are repurposed from their other goals and integrated as
building blocks into CFI. Examples of the former include
Intel CET [22] and ARM Pointer Authentication [21], along
with the body of work built upon them [23], [24], [26], [28],
[97], [98]. Examples of the latter category include CFI using
Intel PT [81] and ARM Trace Macrocell (TMC) [119].
As discussed in Section 2.3, CFA necessitates RoT
support to generate (and sign) remotely verifiable evidence.
RoTs in current CFA are implemented via TEEs [34], [71],
[84] or custom hardware changes on Prv [35], [36], [38],
[72], [85], [120]. Current CFA proposals aimed at user-
space programs [117], [121] either trust the operating system
kernel (the code integrity of which can be verified using
static RA as supported by commodity TPMs) or rely on
enclaved execution TEEs [122].
4. Effects & Consequences
This section discusses effects and consequences of de-
sign choices discussed in Section 3.
4.1. Effectiveness
4.1.1. Coverage. In terms of coverage, CFI mechanisms
offer varying degrees of protection, ranging from protect-
ing all edges, i.e., all types of indirect control flow al-
tering instructions [15], [86]–[89], [105], [123], [124], to
partial coverage, targeting either forward-edges [19], [23],
[96], [98], [107], [108], [110], [118], [125] or backward-
edges [21], [22], [28], [97], [109], [126]–[128]. Forward-edge [10] schemes employ software-based IRMs [19],
[118], [125], hardware-assisted monitoring [81], [105], land-
ing pads [23], [96], and pointer authentication [27], [90],
[98]. Backward edge [9] schemes utilize software- [127],
[129] or hardware-based [21], [22] shadow stacks, and
architecture-specific features, such as branch history tables
in the x86 processors [130], [131], static rewriting, e.g., to
form jump tables [109], or pointer authentication [28], [97].
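A minimal sketch of the shadow-stack idea behind backward-edge schemes follows (illustrative Python; addresses are hypothetical). A protected copy of each return address is pushed on call and compared on return.

```python
class ShadowStack:
    """Protected copy of return addresses for backward-edge checks."""
    def __init__(self):
        self._stack = []

    def on_call(self, return_addr):
        self._stack.append(return_addr)          # push shadow copy

    def on_return(self, return_addr):
        expected = self._stack.pop()
        if return_addr != expected:              # corrupted main-stack copy
            raise RuntimeError(
                f"ROP detected: {hex(return_addr)} != {hex(expected)}")

ss = ShadowStack()
ss.on_call(0x401234)
ss.on_call(0x405678)
ss.on_return(0x405678)          # benign return
try:
    ss.on_return(0x41414141)    # attacker-overwritten return address
except RuntimeError:
    pass
```

The scheme's security rests entirely on the shadow copy being tamper-proof, which is why hardware-backed variants [21], [22] isolate it architecturally.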
Additionally, CFI designs can offer partial protection/-
coverage in specific scenarios. For example, some designs
concentrate on protecting C-like applications [125] while
leaving out relevant structures specific to C++, such as
vtables, and vice-versa [19]. Conversely, other designs focus
on statically linked applications [15], [86], [87], [90], [109],
overlooking dynamic linking and associated concerns, such
as protecting Procedure Linkage Tables (PLT) and Global
Offset Tables (GOT) [89], [132].
The majority of CFI mechanisms (regardless of their
coverage) is context insensitive [15], [19], [23], [86], [87],
[89], [90], [118], [124], [133]. This can introduce gaps that
are challenging to detect [64]. Moreover, it limits the ability
to detect non-control-data attacks [11].
CFA schemes can enhance coverage and expand (re-
mote) detection capabilities beyond traditional control-flow
hijacking attacks. As CFA evidence includes the executed
control flow path, it (in principle) informs Vrf of any out-of-
order execution, including DOP attacks which are oblivious
to most CFI. Some approaches [71]–[73] include data inputs
within CFA, augmenting produced evidence to also make
DDMs observable. It is important to note, however, that CFA
evidence is only truly useful if Vrf can effectively analyze
it. This last aspect has been, for the most part, overlooked
in the current literature. We revisit this point in Section 5.2.
4.1.2. Compatibility. Compatibility is a fundamental aspect
to consider when evaluating the effectiveness of CFI and
CFA mechanisms and manifests in various forms.
Binary Support. Despite the abundance of CFI mech-
anisms, few operate directly on binaries using static binary analysis [86], [87], [93], [94] or specific hardware
extensions [92], [105], [124], [126], [133]–[135]. While
these have broader applicability, they suffer from false
positives [101], typically employing ad hoc approaches to
recover CFGs or simply marking all address-taken functions
and call-site preceded instructions as legitimate targets for
indirect branches [18]. Conversely, CFI based on source
code [19], [27], [81], [89], [98], [104], [107], [108] con-
structs more accurate CFGs. Access to source code typi-
cally allows for advanced static analysis techniques (e.g.,
points-to analysis as seen in µCFI [81] and multi-layer
type analysis as seen in MLTA [83]), increasing coverage
and precision. However, source-level schemes do not apply
to commercial off-the-shelf (COTS) software, where only
binary images are available [16].
CFA designs typically do not require source code knowl-
edge to generate control flow evidence [34], whereas source
code knowledge may assist Vrf in analyzing received ev-
idence (see Section 5.2). Hardware-based CFA inherently
supports binaries [36], [38], [72], [85] by integrating with
the CPU core and detecting branch instructions at runtime.
CFA relying on instrumentation [34], [35], [71], [84], [136]
can instrument control flow transfers in the binary without
knowing the source. This is because the required instrumentation is used only to log destination addresses, rather than to determine/enforce policies locally. Exceptions to
this include schemes mixing evidence generation with local
integrity checks, e.g., [120].
Modular/Shared Object Support. A limitation of many
CFI mechanisms is the lack of support for external modules
or dynamic shared objects (DSO). These mechanisms often
rely on global information that may not always be available,
making it challenging to implement globally compatible
CFI. Abstractly speaking, support for external/shared mod-
ules involves (i) integrating multiple modules hardened by
CFI separately and (ii) integrating CFI-protected modules
with unprotected legacy code. Binary solutions such as
CCFIR [87] attempt to address these issues by allowing
more targets than necessary, striking a security-compatibility
compromise. Although approaches such as MCFI [89] and
RockJIT [137] tackle case (i) by independently instrument-
ing each module and generating new CFGs when modules
are linked, recent CFI solutions that offer stronger security
guarantees, exemplified by µCFI [81] and OS-CFI [107],
do not provide modular support. Even contemporary solu-
tions employing hardware features, e.g., PACStack [28] and
PACTight [98], struggle to address both issues (i) and (ii).
While not explicitly discussed in prior work, the lack
of modular support in CFA can be attributed to (i) most
CFA proposals being aimed at simple embedded systems
(as seen in Table 1) where applications are statically linked
within a single module; and (ii) DSO support would have
implications on Vrf evidence analysis, requiring careful
consideration.
Hardware Dependence. Hardware-specific features can
enhance CFI and CFA. However, they limit a scheme’s
compatibility to architectures that support them and intro-
duce challenges for legacy systems. For instance, CFI like
PittyPAT [105], GRIFFIN [124], µCFI [81], and PathAr-
mor [104] utilize Intel PT and LBR to obtain runtime infor-
mation and compute a smaller set of legitimate targets, strik-
ing a balance between accuracy and performance overhead.
Similarly, OS-CFI [107], and CFI-LB [108] leverage Intel
TSX (Transactional Synchronization Extensions) and MPX
(Memory Protection Extensions) to safeguard instrumented
code and metadata against malicious tampering. Approaches
such as HCFI [123] propose custom hardware modifications.
TEE-based CFA schemes demonstrate how instrumen-
tation can be used alongside RoT hardware support (e.g.,
ARM TrustZone [77], Intel MPK [138], or PoX architec-
ture [35]) to implement CFA. Early hardware-based CFA,
such as LO-FAT [36] and ATRIUM [85], add custom branch
monitors and hash engines to detect and accumulate control
flow transfers as a hash digest. LiteHAX [72] opts for
more expressive evidence, using dedicated hardware to log
and store all control flow transfers, aiming at easing Vrf
subsequent analysis. ACFA [38] uses custom hardware for
branch detection while eliminating the cost of hash en-
gines to make instrumentation-less CFA feasible in budget-
constrained micro-controllers. Instead, it incorporates com-
ponents of a static RA architecture (VRASED [67]) and
an active RoT (GAROTA [139]). The former is used to
authenticate CFA evidence, while the latter is leveraged to
ensure reliable delivery of evidence to Vrf (enabling auditing
guarantees).
Functionality. Recent evaluations of various CFI de-
fenses have highlighted compatibility issues that can com-
promise the intended functionality of the target applica-
tion [64], [140]. Notably, the implementation approach in
Lockdown [93] and OS-CFI [107] fails to correctly compile
certain applications, e.g., nginx. Moreover, CFI mechanisms
such as OS-CFI [107] and CFI-LB [108] have been found
to generate false positives. Additionally, the analysis mech-
anism of LLVM-CFI [19] is incompatible with at least one
application in the SPEC CPU2006 suite, as reported in [23].
CFI mechanisms that depend on reserving registers, e.g.,
VIP [110], can corrupt functionality when targeting appli-
cations with inline assembly that utilize the same registers.
CFA mechanisms that instrument binaries may also en-
counter compilation failures due to instrumentation issues,
as observed in ReCFA with specific benchmarks [121]. As
CFA is a newer concept, fewer studies exist on analyzing
CFA instrumentation compatibility. At least in principle,
issues presented in CFI schemes (e.g., similar to VIP-CFI,
TinyCFA employs a reserved register) could also apply to
CFA, depending on the instrumentation strategy used.
4.1.3. Feasibility. CFI approaches are to a large extent fea-
sible, despite inherent uncertainties around the robustness of
policies due to granularity and context sensitivity (discussed
further in Section 4.2).
As with any attestation mechanism, CFA requires a
secure RoT on Prv to maintain and authenticate evidence,
as discussed in Section 3.2. It also requires communication
with an external Vrf. Naturally, custom hardware features
(such as branch monitors) improve feasibility and reduce the
cost of CFA. Being a relatively recent concept, we expect
hardware features to support CFA to take longer to reach
off-the-shelf devices.
4.1.4. Performance & Scalability. When comparing the
scalability of CFI and CFA, it becomes evident that CFI
generally encounters fewer or no scalability challenges due to its localized nature. Since the scope of CFI is confined
to local decisions based on control flow policies, scalability
issues revolve around code size and runtime of the individ-
ual applications being protected. A study dedicated to CFI
performance can be found in [16].
In contrast, CFA faces further scalability challenges in
storing and transmitting runtime evidence. Schemes such as
ScaRR [117] and ACFA [38], which continuously report
evidence to Vrf, may face challenges when attempting to
cover multiple active applications on the same Prv. This
may impact availability, particularly in scenarios where
network communication is essential (e.g., cloud). In other
schemes, Prv may need to store a large CFLog if attested
operations are complex. This can limit applicability to small
and self-contained operations [35], [71].
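A sketch of slice-based reporting in the spirit of ACFA's continuous delivery, which bounds Prv-side storage by sealing and transmitting the CFLog whenever a fixed-size buffer fills. All names and the `send` callback are hypothetical; a real design would authenticate each slice via the RoT before transmission.

```python
class SlicedCFA:
    """Bound Prv storage: when the local buffer fills, emit the slice
    to Vrf with a sequence number and start a fresh one."""
    def __init__(self, capacity, send):
        self.capacity, self.send = capacity, send
        self.log, self.seq = [], 0

    def record(self, dst):
        self.log.append(dst)
        if len(self.log) >= self.capacity:
            self.send(self.seq, self.log)   # authenticated by the RoT in practice
            self.seq += 1
            self.log = []

received = []
cfa = SlicedCFA(capacity=2, send=lambda seq, s: received.append((seq, list(s))))
for dst in [0x100, 0x204, 0x3F8]:
    cfa.record(dst)
assert received == [(0, [0x100, 0x204])]
assert cfa.log == [0x3F8]
```

The sequence numbers let Vrf detect dropped or reordered slices, at the cost of making network availability part of the scheme's operating assumptions.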
4.2. Attack Vectors
In this section, we explore how gaps or design choices
(typically aimed at trading off performance for security) in
CFI and CFA can lead to attack vectors.
4.2.1. Pitfalls. Attacks can exploit various well-known pit-
falls or limitations, including, but not limited to:
Granularity: many past attacks have exposed the inef-
fectiveness of coarse-grained CFI defenses for both forward
and backward edges [99]–[101].
Implementation issues: the implementation of defenses
may deviate from their design specifications, leading to a
larger number of allowed branch targets than necessary. For
example, [64] highlighted implementation mistakes in mul-
tiple CFI defenses, including MCFI [89] and PARTS [26].
Imprecise consideration of language semantics: At-
tacks such as COOP [141] have affected T-VIP [142] and
VTint [143] due to inadequate incorporation of language-
specific semantics.
Hardware design limitations: Certain attacks have
specifically targeted the hardware design of CFI mecha-
nisms. For instance, the attack on HAFIX [126] highlighted
vulnerabilities stemming from hardware limitations [144].
Exploitation of Assumptions: defenses always rely on
assumptions within their threat models. Thus, attacks can
exploit and falsify these trust assumptions to bypass the de-
fense mechanisms. For instance, a widespread CFI assump-
tion is W⊕X. The POP attack [65] serves as an example
where this assumption was violated to bypass FineIBT [23]
defense on the Linux kernel v6.2.8.
Corner Cases: Certain attacks exploit exceptional cases.
For instance, the Control Jujutsu attack [145] highlighted
the limitations of fine-grained CFI defenses with activated
shadow stacks in complex code bases like Apache and
nginx. Due to coding practices in these code bases, context-insensitive analysis, regardless of its intended robustness, creates over-approximated CFGs that render CFI ineffective.
Another example is CHOP [146], which further undermines
robust backward edge protection mechanisms, including
hardware-based shadow stack implementations [22]. It lever-
ages a specific corner case that enables manipulation of the
stack unwinding path during exception handling to launch
ROP-like attacks, using the unwinder as a confused deputy.
CFA has not yet been extensively evaluated in this regard: coarser-grained CFA [37] (or those based on attesting Prv adherence
to locally enforced CFI policies [120]) may be subject to
the same attack vectors as coarse-grained/context-insensitive
CFI, where certain attacks would not appear in the CFA
evidence. Yet, CFA that monitors all indirect branches can
withstand language semantic issues, enabling detection of
attacks such as COOP [141]. Additionally, CFA can also
provide evidence of logic implementation bugs that lead to
unintended paths, in addition to attacks rooted in memory
safety vulnerabilities. Naturally, the expressiveness of CFA
evidence (i.e., whether it gives Vrf full path evidence or a
subset) comes at the price of its (lossless) storage and trans-
mission. Unsurprisingly, implementation deviations (from
intended specifications) and falsifiable assumptions would
equally affect CFA and CFI.
4.2.2. Control Flow Bending (CFB). CFB attacks [20]
generalize non-control data attacks targeting CFI schemes
relying on statically generated CFGs. While many CFI at-
tacks targeted weaker or sub-optimal implementations [99]–
[101], CFB focuses on bypassing the most restrictive (or
optimal) static CFI policies. CFB creates malicious (Turing-
complete) paths that exist on the most strict CFG for a given
program by exploiting specific functions, called dispatchers,
which have the capability to modify their own return ad-
dresses. In other words, CFB can arbitrarily modify (bend)
a program’s behavior/path while staying within the confines
of the imposed security policy.
This highlights that even fine-grained CFI can be by-
passed if dynamic backward protection is not implemented
(e.g., via a secure shadow stack). To mitigate CFB, cer-
tain CFI proposals incorporate dynamic analysis [82] or
leverage hardware features that provide runtime information
on execution status [81]. Additionally, context-sensitive CFI
schemes have the potential to reduce the impact of CFB by
maintaining an execution history and validating the execu-
tion of return instructions accordingly [107], [108].
Most CFA approaches log all dynamically defined
branch targets within their execution scope. Thus, CFB
path deviations appear in generated evidence, making CFB
attacks apparent to Vrf. That said, (similar to cases discussed
above) the effectiveness of Vrf in detecting CFB based on CFA
evidence remains to be concretely evaluated.
4.2.3. Race Conditions. Many CFI methods overlook
thread safety in multi-threaded applications. This can leave
them vulnerable to Time-Of-Check-to-Time-Of-Use (TOC-
TOU) attacks. Software-based approaches such as LLVM-
CFI [19] face challenges in ensuring thread safety, especially
in the presence of blind compiler optimizations that can
inadvertently expose sensitive variables used for security
checks. This can create race conditions that enable TOC-
TOU attacks [152].
Additionally, WarpAttack [153] revealed that compiler
optimizations can introduce double-fetch vulnerabilities, re-
sulting in concurrency issues and TOCTOU, even with
a strict static CFI policy that includes both forward and
TABLE 1. CATEGORIZATION OF CFI AND CFA SCHEMES, HIGHLIGHTING THEIR MAIN PROPERTIES AND REQUIREMENTS.
Columns: Year; Scheme; Device Type/Target: Embedded (bare-metal), Embedded (OS), High-end (User-space), High-end (Kernel); Mechanism; Sensitivity; System Support; Scope: Data-Only, ROP, JOP; Evidence Expressiveness; Overheads: Runtime, Code Size, Custom Hardware, Network.
Control Flow Integrity (CFI) Approaches
2013 bin-CFI [86] ✗ ✗ ✓✗SWI ✗OS/MMU ✗# # -●●✗-
2013 CCFIR [87] ✗ ✗ ✓✗SWI+R/I ✗OS/MMU ✗# # -●●✗-
2014 LLVM CFI [19] ✗ ✗ ✓ ✓ SWI ✗OS/MMU ✗ ✗ G# -●●✗-
2014 KCoFI [88] ✗ ✗ ✗ ✓SWI ✗MMU ✗G# G# -● ● ✗-
2014 MCFI [89] ✗ ✗ ✓✗SWI CS OS/MMU ✗G# G# -● ● ✗-
2014 RockJIT [137] ✗ ✗ ✓✗SWI CS OS/MMU ✗G# G# -●●✗-
2015 CCFI [90] ✗ ✗ ✓✗SWI ✗OS/MMU ✗ -●●✗-
2015 HAFIX [126] ✓✗ ✗ ✗ ISA Path C-HW ✗ ✗-● ● ●-
2015 CFCI [91] ✗ ✗ ✓✗SWI ✗OS/MMU ✗# # -●●✗-
2015 O-CFI [92] ✗ ✗ ✓✗SWI+R/I ✗OS/MMU+MPX ✗G# G# -●●✗-
2015 πCFI [82] ✗ ✗ ✓✗SWI ✗OS/MMU ✗G# G# -●●✗-
2015 PathArmor [104] ✗ ✗ ✓✗SWI Path OS/MMU+LBR ✗ -●●✗-
2015 Lockdown [93] ✗ ✗ ✓✗SWI+R/I ✗OS/MMU ✗ -● ● ✗-
2016 TypeArmor [94] ✗ ✗ ✓✗SWI ✗OS/MMU ✗ ✗ G# -●●✗-
2016 FG-CFI [95] ✗ ✗ ✗ ✓SWI ✗MMU ✗G# G# -● ● ✗-
2016 HCFI [123] ✗ ✗ ✓✗ISA ✗OS+C-HW ✗ -● ● ●-
2017 PittyPAT [105] ✗ ✗ ✓✗SWI Path OS/MMU+PT ✗ -●●✗-
2017 GRIFFIN [124] ✗ ✗ ✓✗SWI ✗OS/MMU+PT+TSX ✗ -●●✗-
2017 CFI-CaRE [133] ✓✗ ✗ ✗ SWI+R/I ✗TZ ✗ G# -● ● ✗-
2017 Intel CET [22] ✗ ✗ ✓ ✓ ISA ✗OS/MMU+CET ✗ # -●●✗-
2018 µCFI [81] ✗ ✗ ✓✗SWI ✗OS/MMU+PT ✗ -●●✗-
2018 SCFP [106] ✓✗ ✗ ✗ SWI+C-HW Path C-HW ✗ G# -●●●-
2018 ARM BTI [96] ✓ ✓ ✓ ✓ ISA ✗BTI ✗ ✗ #-●●✗-
2018 PAC-RET [97] ✓ ✓ ✓ ✓ ISA ✗PA ✗ ✗-● ● ✗ -
2019 OS-CFI [107] ✗ ✗ ✓✗SWI+R/I Path OS/MMU+MPX+TSX ✗ ✗ -● ● ✗-
2019 CFI-LB [108] ✗ ✗ ✓✗SWI+R/I CS OS/MMU+TSX ✗ ✗ -●●✗-
2019 PARTS [26] ✗ ✗ ✓✗SWI+ISA ✗OS/MMU+PA + -● ● ✗-
2020 µRAI [109] ✓✗ ✗ ✗ SWI+R/I Path MPU ✗ ✗-●●✗-
2020 Silhouette [127] ✓✗ ✗ ✗ SWI+R/I ✗MPU ✗ ✗-● ● ✗ -
2021 VIP [110] ✗ ✗ ✓✗SWI+R/I Path OS/MMU+MPK +✗ -● ● ✗-
2021 PACStack [28] ✗ ✗ ✓✗SWI+ISA ✗OS/MMU+PA ✗ ✗-● ● ✗ -
2022 TyPro [125] ✗ ✗ ✓✗SWI ✗OS/MMU ✗ ✗ -● ● ✗ -
2022 PAL [27] ✗ ✗ ✗ ✓SWI+ISA ✗PA+MMU ✗ -● ● ✗ -
2022 PACTight [98] ✗ ✗ ✓✗SWI+ISA ✗OS/MMU+PA ✗ -●●✗-
2023 FineIBT [23] ✗ ✗ ✓ ✓ SWI+ISA ✗CET+MMU ✗ ✗ G# -●●✗-
2023 SHERLOC [112] ✓✗ ✗ ✗ SWI Path TZ+MTB+DWT ✗ G# -●●✗-
2023 TypeSqueezer [147] ✗ ✗ ✓✗SWI Path OS/MMU ✗ ✗ -● ● ✗-
2024 HEK-CFI [148] ✗ ✗ ✗ ✓ISA ✗CET+MMU ✗ G# -● ● ✗ -
Control Flow Attestation (CFA) Approaches
2016 C-FLAT [34] ✓✗ ✗ ✗ SWI Vrf -based TZ ◻ △● ● ✗☆
2017 LO-FAT [36] ✓✗ ✗ ✗ C-HW Vrf -based C-HW ◻ △✗ ✗ ●☆
2017 ATRIUM [85] ✓✗ ✗ ✗ C-HW Vrf-based C-HW ◻ △✗ ✗ ●☆
2018 LiteHAX [72] ✓✗ ✗ ✗ C-HW Vrf-based C-HW ⊞ ▲✗ ✗ ●☀
2019 DIAT [149] ✓ ✓ ✗ ✗ SWI Vrf-based TZ ◻ △● ● ✗☆
2019 ScaRR [117] ✗ ✗ ✓✗SWI Vrf -based OS/MMU ◻ ▲● ● ✗☀
2019 RIM [120] ✓✗ ✗ ✗ C-HW Path C-HW ⊞ △✗ ✗ ?☆
2020 OAT [71] ✓ ✓ ✗ ✗ SWI Vrf-based TZ ⊞ ● ● ✗☆
2020 LAHEL [150] ✓ ✓ ✗ ✗ C-HW Vrf -based C-HW ◻G# G# ● ● debug HW ☆
2020 LAPE [151] ✓✗ ✗ ✗ SWI+R/I Vrf-based MPU ◻G# G# △● ● ✗☆
2021 Tiny-CFA [35] ✓✗ ✗ ✗ SWI Vrf -based C-HW ◻ ▲● ● ●☆
2021 DIALED [73] ✓✗ ✗ ✗ SWI Vrf -based C-HW ⊞ ▲● ● ●☆
2021 ReCFA [121] ✗ ✗ ✓✗SWI+R/I Vrf-based OS+MPK ◻ ▲● ● ✗☆
2022 GuaranTEE [122] ✗ ✗ ✓✗SWI Vrf -based Intel SGX ◻ △● ● ✗☆
2023 ACFA [38] ✓✗ ✗ ✗ C-HW Vrf -based C-HW ◻ ▲✗ ✗ ●☀
2023 ARI [37] ✓✗ ✗ ✗ SWI Vrf -based TZ ◻G# G# ● ● ✗☆
2023 BLAST [136] ✓ ✓ ✗ ✗ SWI Vrf -based TZ ◻ ▲● ● ✗☆
2023 ISC-FLAT [84] ✓✗ ✗ ✗ SWI Vrf -based TZ ◻ ▲● ● ✗☆
Legend: ✓ Has this feature, ✗ Lacks this feature, - Feature is not applicable, SWI: Software Instrumentation, R/I: Randomization or Isolation, ISA: Instruction Set
Architecture, MMU: Memory Management Unit, C-HW: Custom Hardware, OS: Operating System, CS: Context Sensitive, Path: Context- & Path-Sensitive, MPX: Intel
Memory Protection eXtensions, LBR: Intel Last Branch Record, PT: Intel Processor Trace, TSX: Intel Transactional Synchronization Extensions, TZ: ARM TrustZone,
CET: Intel Control-Flow Enforcement Technology, BTI: ARM Branch Target Identification, PA: ARM Pointer Authentication, MPU: Memory Protection Unit, MPK: Intel
Memory Protection Keys, MTB: ARM Macro Trace Buffer, DWT: ARM Data Watchpoint and Trace, ● Higher overhead,
? Feature required but cost not reported, debug HW refers to reliance on prototyping/debug features not meant for device deployment.
backward-edge protections. WarpAttack bypassed several
CFI defenses, including LLVM-CFI [19], Lockdown [93],
and MS-CFG [118]. To mitigate race conditions, contempo-
rary CFI mechanisms rely on hardware support. For exam-
ple, OS-CFI [107] and CFI-LB [108] utilize Intel TSX to
safeguard intermediate values.
In CFA (and more broadly RA), resistance against TOC-
TOU attacks and race conditions often refers to achieving
temporal consistency between when the executable binary
is measured and when it is executed [35], [38], [84], [154]–
[156]. Aside from modifications to code, the integrity of
CFA evidence can be compromised by external interrupts
that may stealthily modify the control flow path or the
execution state, as shown and mitigated by ISC-FLAT [84].
4.2.4. Side channels. The emergence of microarchitectural
attacks can affect CFI and CFA. While these defenses focus
on memory corruption attacks, certain variants of Spec-
tre [157] can affect them. For instance, Spectre v1 exploits
misspeculation following a bounds-check prior to an array
access, while Spectre v2 exploits misprediction of the target
of an indirect call or jump. Both utilize a Flush+Reload
channel [158] to leak data. Research has demonstrated that
Spectre v1-like attacks can bypass software-based CFI de-
fenses, such as LLVM-CFI [19], even in the presence of
all default mitigations [159]. While specialized mitigations,
such as SpecCFI [160] and MicroCFI [161], were proposed,
Spectre v2 remains severe and yet to be fully mitigated.
Although contemporary CFI defenses, such as Intel
CET [22], consider a post-Spectre threat model and are
designed with built-in protection against Spectre v2 [162],
recent attacks, such as InSpectre Gadget [163], have uncov-
ered new types of exploitable gadgets that can successfully
mount Spectre v2 attacks, even if the CET’s Indirect Branch
Tracking (IBT) feature or its recent fine-grained counter-
part, FineIBT [23], are active. PACMAN [31] stands out
as another recent attack that exploits speculative execution
along with memory corruption to bypass ARM Pointer
Authentication on Apple M1 SoCs.
Similar to CFI, CFA leveraging architectural components
vulnerable to side channels could be equally vulnerable. On
the other hand, several secret-dependent timing side
channels [164] (that exploit software implementation bugs,
rather than micro-architectural bugs) depend on differences
in the target program’s control flow path, opening opportu-
nities for exploit identification based on CFLog analysis. To
our knowledge, the latter remains unexplored in prior work.
5. Takeaways and Paths Forward
We conclude this paper by synthesizing insights from the discussions presented in Section 3, Section 4, and Table 1.
Based on these insights, we revisit questions [Q1-Q4] from
Section 1.
5.1. Takeaways
Considering question [Q1], posed in Section 1, our sys-
tematization presents several differences between CFI and
CFA. The effectiveness of CFI mechanisms is intrinsically
tied to the comprehensiveness and accuracy of a (statically-
defined or dynamic) policy enforced locally. Most CFA tech-
niques are policy-agnostic, passively monitoring execution
to generate authenticated control flow reports. Contrary to
CFI, CFA concerns convincing a remote party of trustworthy
execution behavior, serving as a run-time analog to static
attestation methods that prove the integrity of booted/loaded
code. Thus, CFA reports are transmitted to a remote Vrf for
analysis. These observations lead us to Takeaway 1.
Takeaway 1: CFI and CFA have different goals
CFI focuses on local detection of control-flow vio-
lations/hijacking. CFA provides remote evidence of
execution behavior irrespective of underlying policy
enforcement.
Regarding [Q2], we first examine CFA/CFI assumptions.
Many CFI schemes assume the ability to apply W⊕X on
memory to preserve the software instrumentation (SWI) and
avoid code injection. In Table 1, this is apparent from user-
space CFI schemes frequently relying on OS/MMU system
support to enforce the W⊕X policy. While CFA mechanisms
need not impose W⊕X, they must rely on an attestation
RoT in Prv to attest that reported runtime evidence is
authentic (this includes code integrity and instrumentation,
when applicable). Furthermore, unlike CFI, CFA requires
network connectivity between Vrf and Prv. Despite these
differences, we also observe that state-of-the-art techniques
for CFI and CFA intersect in their mechanisms for monitor-
ing control flow events. For instance, many schemes utilize
SWI as a mechanism while relying on hardware (whether
commodity or custom) to protect or support SWI, as shown
in Table 1. Table 1 also shows that both CFI and CFA can
eliminate some SWI using ISA-specific or custom hardware
extensions. This is summarized in Takeaway 2.
Takeaway 2: Design intersections & differences
Although CFI and CFA schemes share many com-
monalities in their strategies (as apparent in the
Mechanism column of Table 1), they also have
distinct requirements for their system models, e.g.,
as seen in the System Support and Network
Overhead columns of Table 1.
A common misconception/oversimplification related to [Q3] is that CFA's entire purpose is to enable CFI checks to be outsourced to a resource-rich Vrf, avoiding CFI costs on Prv. As extensively discussed in this systematization, CFA goals go beyond outsourcing CFI checks.
As evidence of that, recent CFA methods have evolved
to generate expressive (often lossless) control flow path
evidence, as opposed to proving adherence to a locally
enforced CFI policy. See Evidence Expressiveness
column, in Table 1. This is subsumed by Takeaway 3.
Takeaway 3: CFA goes beyond outsourced CFI
CFI is clearly the best choice for local detection of run-time attacks. CFA enables remote (and offline) control flow path analysis, giving remote visibility into complex path deviations (e.g., DOP and CFB) to which most practical CFI schemes are oblivious – see Scope column in Table 1. CFA evidence also makes control path bugs rooted in program logic (rather than memory corruption) observable. Finally, it facilitates auditing and root cause analysis, provided the evidence is reliably delivered to Vrf. On the other hand, remote observability in CFA comes at the cost of supporting communication and a secure attestation RoT implementation.
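To illustrate why expressive path evidence matters, consider a toy example (the CFG and the benign-path profile below are hypothetical) in which a control-flow-bending-style path satisfies a CFI-style CFG check yet is flagged by Vrf-side analysis over the full recorded path:

```python
# Toy CFG: node -> set of allowed successors.
CFG = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}

def cfi_check(path):
    """CFI-style policy check: every taken edge must exist in the CFG."""
    return all(dst in CFG[src] for src, dst in zip(path, path[1:]))

# Paths observed during benign profiling (assumed known to Vrf).
EXPECTED_PATHS = {("A", "B", "D")}

def cfa_path_analysis(path):
    """Vrf-side analysis over lossless evidence: flag CFG-valid but unexpected paths."""
    return tuple(path) in EXPECTED_PATHS

bent = ["A", "C", "D"]  # CFG-valid edge sequence, yet behaviorally anomalous
assert cfi_check(bent) and not cfa_path_analysis(bent)
```

The "bent" path passes the edge-level policy that a local CFI monitor would enforce, but only whole-path evidence lets Vrf notice the deviation; real analyses would of course be far richer than a whitelist lookup.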
Regarding [Q4], given their distinct security goals, the
coexistence of CFI and CFA on the same platform could
be possible if the performance overhead is acceptable in the
target domain. We believe exploring hybrid approaches that
combine the strengths of CFI and CFA to be an intriguing
avenue for further research. A potential hybrid design might
include CFI building blocks that can be elegantly incorporated into CFA reports. Considering that many state-of-the-art CFI schemes offer fine-grained local ROP detection with low overhead (as seen in the Scope and Overheads columns of
Table 1), a hybrid approach might implement CFI techniques
for local ROP detection while utilizing CFA techniques for
generating expressive evidence of path deviations due to
JOP, DOP, and/or logic control bugs. However, CFI/CFA
integration is not trivial, as differences in designs and system
assumptions should be considered and can contribute to
overheads. These observations are summarized in Takeaway
4.
Takeaway 4: Coexistence merits investigation
Given the trade-offs between CFI and CFA, a hybrid
approach could offer both local responses to simpler
runtime attacks and remote visibility to complex
attacks and their root causes. On the other hand,
overheads of both approaches on the same platform
could challenge practical adoption.
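One way such a hybrid might be structured, sketched here in Python with hypothetical names and event hooks, is a monitor that enforces backward-edge integrity locally (shadow-stack style, for immediate ROP detection) while logging forward edges as evidence for a CFA report:

```python
class HybridMonitor:
    """Sketch: local shadow stack (CFI-style ROP detection) combined with a
    forward-edge log intended for inclusion in a CFA report."""

    def __init__(self):
        self.shadow = []   # locally enforced backward-edge state
        self.fwd_log = []  # (call_site, target) pairs shipped to Vrf later

    def on_call(self, site, target, ret_addr):
        self.shadow.append(ret_addr)         # protect the backward edge locally
        self.fwd_log.append((site, target))  # record forward edge as CFA evidence

    def on_return(self, ret_addr):
        # Local response: a mismatch is detected (and handled) on Prv itself.
        if not self.shadow or self.shadow.pop() != ret_addr:
            raise RuntimeError("ROP detected: return address mismatch")
```

Here the local check gives an immediate response to simpler attacks, while the forward-edge log preserves the remote visibility argued for above; integrating the two in practice would still face the system-model differences noted in Takeaway 2.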
5.2. Paths Forward
Demand for Stronger Threat Models. Currently con-
sidered threat models (in both CFI and CFA) can be limited
in scope or may not adequately address the challenges posed
by sophisticated adversaries (e.g., those capable of launching
side-channel attacks). Next-generation mechanisms should
consider stronger threat models to encompass new attack
vectors that can lead to control-flow violations.
CFA Support for Complex Software. The current
landscape of CFA mechanisms primarily focuses on address-
ing the needs of simple, specialized, bare-metal embedded
software (see column Device Type/Target in Table 1).
However, this limited scope poses challenges when it comes
to applying these mechanisms to complex software scenarios
with wider attack surfaces. To overcome this limitation, it
is crucial to develop CFA mechanisms specifically tailored
for complex software. Addressing this challenge would not only align with zero-trust principles in demanding domains such as cloud computing, but also yield valuable insights into the effectiveness of CFA mechanisms relative to CFI mechanisms.
CFA Evidence Verification & Practicality. The major-
ity of CFA literature focuses on Prv, assuming a Vrf able
to interpret received evidence to detect attacks and identify
root causes as long as the evidence is sufficiently expressive.
Alas, there is a significant lack of concrete Vrf instances to
substantiate postulated evidence analysis capabilities. Most
of the CFA literature either leaves Vrf implementation as
future work or implements simple remote checks based on
received evidence, e.g., adherence to a CFG or emulated
shadow stack (both of which could also be done locally
by several CFI methods). Only two studies have explored
alternative approaches. ZEKRA [165] suggests generating a
zero-knowledge proof of CFG adherence for an untrusted
Vrf, while RAGE [166] proposes training a Graph Neural
Network (GNN) on previous runtime evidence for path
verification. Yet, no prior work systematically demonstrates
postulated benefits of CFA in terms of uncovering complex
attacks (and root causes) based on remotely analyzed evi-
dence.
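For concreteness, the "emulated shadow stack" style of remote check mentioned above can be sketched as a Vrf-side replay of call/return events from the received evidence (the event encoding here is assumed purely for illustration):

```python
def verify_backward_edges(events):
    """Replay call/ret events from CFA evidence on an emulated shadow stack.
    Each event is (kind, addr); for calls, addr is the expected return address."""
    shadow = []
    for kind, addr in events:
        if kind == "call":
            shadow.append(addr)
        elif kind == "ret":
            if not shadow or shadow.pop() != addr:
                return False  # return target does not match the recorded call site
    return True

assert verify_backward_edges([("call", 0x100), ("ret", 0x100)])
assert not verify_backward_edges([("call", 0x100), ("ret", 0x666)])
```

As noted, this particular check could equally be performed locally by many CFI methods; the open question is demonstrating analyses that genuinely exploit remote, expressive evidence.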
Additionally, striking a balance between evidence ex-
pressiveness and overhead poses a challenge in achiev-
ing full-fledged CFA. Hashed paths compromise detailed
runtime evidence in exchange for reduced storage and
transmission costs. However, lossless path representations
(and associated transmission to Vrf) remain costly. As the
complexity of the applications increases, the importance of
expressiveness/cost trade-offs becomes more pronounced.
Within this realm, promising avenues for future work in-
clude the development of mechanisms to reduce evidence
storage and transmission costs while maintaining relevance
and expressiveness.
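The expressiveness/cost trade-off can be made concrete with a small sketch contrasting a lossless branch log with a cumulative hash in the style of C-FLAT [34] (the encoding and sizes below are illustrative assumptions, not taken from any specific scheme):

```python
import hashlib

# A recorded execution path: (src, dst) branch pairs, repeated to mimic loops.
path = [(0x100, 0x200), (0x200, 0x300), (0x300, 0x100)] * 1000

# Lossless evidence: ship the entire branch sequence (8 bytes per edge here).
lossless_size = len(path) * 8

# Hashed evidence: fold each edge into a running digest, C-FLAT style.
digest = b"\x00" * 32
for src, dst in path:
    digest = hashlib.sha256(
        digest + src.to_bytes(4, "little") + dst.to_bytes(4, "little")
    ).digest()
hashed_size = len(digest)  # constant 32 bytes regardless of path length

assert hashed_size < lossless_size
```

The hashed digest has constant size, but Vrf can only match it against precomputed hashes of expected paths; the per-edge detail needed for root cause analysis is lost, which is precisely the tension described above.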
CFI in Real-Time Systems and Other Niches. To
the best of our knowledge, the majority of existing CFI
mechanisms are not well-suited for real-time systems, high-
lighting the need for innovative approaches in this domain.
Proposing effective CFI mechanisms for real-time systems
entails addressing two critical challenges that are vital for
ensuring both functional correctness and system safety.
The first challenge stems from the strict timing require-
ments inherent in real-time systems, particularly regarding
intra-task timing. It is imperative to design CFI mechanisms
that can operate within these constraints without compromis-
ing system performance. A potential avenue to tackle this
challenge is the use of hardware-assisted branch monitors
that reduce/eliminate code instrumentation. By parallelizing
branch checks, hardware-based CFI could provide integrity
without impacting intra-application delays.
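Such a hardware-assisted branch monitor could be modeled, very roughly and with a hypothetical target table, as a check that runs alongside execution rather than as inline instrumentation:

```python
# Hypothetical per-branch target table, as a hardware monitor might hold in SRAM:
# address of an indirect branch -> set of permitted targets.
BRANCH_TABLE = {0x1000: {0x2000, 0x3000}}

def monitor_branch(pc, target):
    """Validate one observed branch. In hardware, this lookup would run in
    parallel with execution (off the critical path), so the checked task's
    intra-task timing is not perturbed by instrumentation."""
    return target in BRANCH_TABLE.get(pc, set())
```

The model omits everything that makes the hardware problem hard (trace capture, table capacity, response latency); it only captures the idea that the check is decoupled from the instruction stream.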
The second challenge involves rethinking the conven-
tional approach of terminating an exploited application upon
detecting a CFI violation, especially in domains such as au-
tonomous systems. Abruptly terminating an application can
introduce system instability or disruptions, posing risks to
critical operations. Alternative strategies could be explored
to recover from CFI violations to ensure system safety
without unintended consequences. A promising direction to
address this challenge is to design CFI schemes accommo-
dating multi-variant execution that allows the containment
of exploited applications while enabling the continuation of
critical tasks.
We note that devising CFI that tackles both aforemen-
tioned challenges is non-trivial and requires more careful
consideration. This includes accounting for factors such as
portability, adaptability, and scalability.
References
[1] TIOBE, “TIOBE Index for May 2023 of Programming Languages,”
https://www.tiobe.com/tiobe-index/, 2023, [Online; accessed 13-
May-2023].
[2] Microsoft, “Data execution prevention,” https://learn.microsoft.com/
en-us/windows/win32/memory/data-execution-prevention, 2022,
[Online; accessed 13-February-2023].
[3] H. Löhr et al., “Patterns for secure boot and secure storage in
computer systems,” in ARES. IEEE, 2010.
[4] W. A. Arbaugh et al., “A secure and reliable bootstrap architecture,”
in S&P. IEEE, 1997.
[5] Trusted Computing Group, “TPM 2.0 Library Specification,” https:
//trustedcomputinggroup.org/resource/tpm-library-specification/,
2024, [Online; accessed 13-May-2024].
[6] LWN.net, “The Integrity Measurement Architecture,” https://lwn.
net/Articles/137306/, 2005, [Online; accessed 13-May-2024].
[7] M. Bires, “Upgrading Android Attestation: Remote Provisioning,” https://android-developers.googleblog.com/2022/03/upgrading-android-attestation-remote.html, 2022, [Online; accessed 13-May-2024].
[8] P. Larsen et al., The Continuing Arms Race: Code-Reuse Attacks and Defenses. Association for Computing Machinery and Morgan & Claypool, 2018.
[9] R. Roemer et al., “Return-oriented programming: Systems, lan-
guages, and applications,” ACM Transactions on Information and
System Security (TISSEC), 2012.
[10] T. Bletsch et al., “Jump-oriented programming: a new class of code-
reuse attack,” in CCS, 2011.
[11] S. Chen et al., “Non-control-data attacks are realistic threats.” in
USENIX Security, 2005.
[12] H. Hu et al., “Data-oriented programming: On the expressiveness
of non-control data attacks,” in S&P. IEEE, 2016.
[13] M. Payer, “Control-flow hijacking: Are we making progress?” in
AsiaCCS, 2017.
[14] L. Szekeres et al., “Sok: Eternal war in memory,” in S&P. IEEE,
2013.
[15] M. Abadi et al., “Control-flow integrity principles, implementations,
and applications,” ACM Transactions on Information and System
Security (TISSEC), 2009.
[16] N. Burow et al., “Control-flow integrity: Precision, security, and
performance,” ACM Computing Surveys (CSUR), 2017.
[17] R. De Clercq et al., “A survey of hardware-based control flow
integrity (cfi),” arXiv preprint arXiv:1706.07257, 2017.
[18] X. Xu et al., “Confirm: Evaluating compatibility and relevance of
control-flow integrity protections for modern software.” in USENIX
Security, 2019.
[19] C. Tice et al., “Enforcing forward-edge control-flow integrity in
GCC & LLVM,” in USENIX Security, 2014.
[20] N. Carlini et al., “Control-flow bending: On the effectiveness of
control-flow integrity,” in USENIX Security, 2015.
[21] ARM, “Learn the architecture - Providing protection for com-
plex software,” https://developer.arm.com/documentation/102433/
0100, 2022, [Online; accessed 18-February-2023].
[22] Tom Garrison, “Intel CET Answers Call to Protect Against Common Malware Threats,” https://newsroom.intel.de/editorials/intel-cet-answers-call-to-protect-against-common-malware-threats/, 2020, [Online; accessed 13-February-2023].
[23] A. J. Gaidis et al., “Fineibt: Fine-grain control-flow enforcement
with indirect branch tracking,” arXiv preprint arXiv:2303.16353,
2023.
[24] Apple, “Operating System Integrity,” https://support.apple.com/guide/security/operating-system-integrity-sec8b776536b/web, 2021, [Online; accessed 18-February-2023].
[25] A. Sharma, “This new Android 14 feature may be meant for the Pixel 8,” https://www.androidauthority.com/android-14-advanced-memory-protection-3281197/, 2023, [Online; accessed 18-February-2023].
[26] H. Liljestrand et al., “Pac it up: Towards pointer integrity using arm
pointer authentication.” in USENIX Security, 2019.
[27] S. Yoo et al., “In-kernel control-flow integrity on commodity oses
using arm pointer authentication,” in USENIX Security, 2022.
[28] H. Liljestrand et al., “Pacstack: an authenticated call stack.” in
USENIX Security, 2021.
[29] G. Serra et al., “Pac-pl: Enabling control-flow integrity with pointer
authentication in fpga soc platforms,” in RTAS. IEEE, 2022.
[30] H. Liljestrand et al., “Color my world: Deterministic tagging for
memory safety,” arXiv preprint arXiv:2204.03781, 2022.
[31] J. Ravichandran et al., “Pacman: attacking arm pointer authentica-
tion with speculative execution,” in ISCA, 2022.
[32] B. Azad, “Examining pointer authentication on the iphone xs,” https://googleprojectzero.blogspot.com/2019/02/examining-pointer-authentication-on.html, 2019, [Online; accessed 27-July-2021].
[33] ——, “iOS Kernel PAC, One Year Later,” https://bazad.github.io/presentations/BlackHat-USA-2020-iOS_Kernel_PAC_One_Year_Later.pdf, 2020, [Online; accessed 18-February-2023].
[34] T. Abera et al., “C-flat: control-flow attestation for embedded sys-
tems software,” in CCS, 2016.
[35] I. D. O. Nunes et al., “Tiny-cfa: A minimalistic approach for control-
flow attestation using verified proofs of execution,” DATE, 2021.
[36] G. Dessouky et al., “Lo-fat: Low-overhead control flow attestation
in hardware,” in DAC, 2017.
[37] J. Wang et al., “Ari: Attestation of real-time mission execution
integrity,” 2023.
[38] A. Caulfield et al., “Acfa: Secure runtime auditing & guaranteed de-
vice healing via active control flow attestation,” in USENIX Security.
USENIX, 2023.
[39] A. Azevedo de Amorim et al., “The meaning of memory safety,”
in Principles of Security and Trust: 7th International Conference,
POST 2018, Held as Part of the European Joint Conferences on
Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece,
April 14-20, 2018, Proceedings 7. Springer, 2018, pp. 79–105.
[40] H. Xu et al., “Memory-safety challenge considered solved? an in-
depth study with all rust cves,” ACM Transactions on Software
Engineering and Methodology (TOSEM), vol. 31, no. 1, pp. 1–25,
2021.
[41] B. Qin et al., “Understanding memory and thread safety practices
and issues in real-world rust programs,” in Proceedings of the 41st
ACM SIGPLAN Conference on Programming Language Design and
Implementation, 2020, pp. 763–779.
[42] D. Midi et al., “Memory safety for embedded devices with
nescheck,” in Proceedings of the 2017 ACM on Asia Conference
on Computer and Communications Security, 2017, pp. 127–139.
[43] N. D. Matsakis et al., “The rust language,” ACM SIGAda Ada
Letters, 2014.
[44] A. S. Elliott et al., “Checked c: Making c safe by extension,” in
SecDev. IEEE, 2018.
[45] T. Nyman et al., “Toward hardware-assisted run-time protection,”
2020.
[46] R. N. Watson et al., “Cheri: A hybrid capability-system architecture
for scalable software compartmentalization,” in S&P. IEEE, 2015.
[47] R. Wahbe et al., “Efficient software-based fault isolation,” in SOSP,
1993.
[48] H. Shacham et al., “On the effectiveness of address-space random-
ization,” in CCS, 2004.
[49] D. Song et al., “Sok: Sanitizing for security,” in S&P. IEEE, 2019.
[50] P. Larsen et al., “Sok: Automated software diversity,” in S&P.
IEEE, 2014.
[51] N. Burow et al., “Sok: Shining light on shadow stacks,” in S&P.
IEEE, 2019.
[52] L. Cheng et al., “Exploitation techniques for data-oriented attacks
with existing and potential defense approaches,” ACM Transactions
on Privacy and Security (TOPS), 2021.
[53] P. Godefroid, “Fuzzing: Hack, art, and science,” Communications of
the ACM, 2020.
[54] Microsoft, “Secure the Windows Boot Process,” https://learn.microsoft.com/en-us/windows/security/operating-system-security/system-security/secure-the-windows-10-boot-process, 2023, [Online; accessed 18-February-2024].
[55] Z. Tao et al., “DICE*: A Formally Verified Implementation of DICE
Measured Boot,” in USENIX Security, 2021.
[56] A. Steffen, “The linux integrity measurement architecture and tpm-
based network endpoint assessment,” Linux Security Summit, 2012.
[57] A. Holdings, “Armv8 architecture reference manual for a-profile
architecture,” 2022.
[58] Y. Chen et al., “Norax: Enabling execute-only memory for cots
binaries on aarch64,” in S&P. IEEE, 2017.
[59] V. Kuznetzov et al., “Code-pointer integrity,” in The Continuing
Arms Race: Code-Reuse Attacks and Defenses, 2018.
[60] K. Serebryany et al., “Memory tagging and how it improves c/c++
memory safety,” arXiv preprint arXiv:1802.09517, 2018.
[61] G. S. Kc et al., “Countering code-injection attacks with instruction-
set randomization,” in CCS, 2003.
[62] P. Rajasekaran et al., “CoDaRR: Continuous Data Space Random-
ization against Data-only Attacks,” in AsiaCCS, 2020.
[63] M. Castro et al., “Securing software by enforcing data-flow in-
tegrity,” in OSDI, 2006.
[64] Y. Li et al., “Finding cracks in shields: On the security of control
flow integrity mechanisms,” in CCS, 2020.
[65] S. Han et al., “Page-oriented programming: Subverting control-flow
integrity of commodity operating system kernels with non-writable
code pages,” in USENIX Security, 2024.
[66] PaX Team, “Non-Executable Pages Design & Implementation,”
https://pax.grsecurity.net/docs/noexec.txt, 2003, [Online; accessed
18-February-2023].
[67] I. D. O. Nunes et al., “VRASED: A verified Hardware/Software
Co-Design for remote attestation,” in USENIX Security, 2019.
[68] M. Ammar et al., “Simple: A remote attestation approach for
resource-constrained iot devices,” in ICCPS. IEEE, 2020.
[69] R. Sailer et al., “Design and implementation of a tcg-based integrity
measurement architecture.” in USENIX Security, 2004.
[70] I. De Oliveira Nunes, S. Jakkamsetti, N. Rattanavipanon, and
G. Tsudik, “On the toctou problem in remote attestation,” in Pro-
ceedings of the 2021 ACM SIGSAC Conference on Computer and
Communications Security, 2021, pp. 2921–2936.
[71] Z. Sun et al., “Oat: Attesting operation integrity of embedded
devices,” in S&P. IEEE, 2020.
[72] G. Dessouky et al., “Litehax: lightweight hardware-assisted attesta-
tion of program execution,” in ICCAD. IEEE, 2018.
[73] I. D. O. Nunes et al., “DIALED: Data Integrity Attestation for Low-
end Embedded Devices,” in DAC. IEEE, 2021.
[74] Trusted Computing Group, “Trusted Platform Module (TPM),” https://trustedcomputinggroup.org/resource/trusted-platform-module-tpm-summary/, 2008, [Online; accessed 18-February-2024].
[75] ——, “Device Identifier Composition Engine (DICE),” https://trustedcomputinggroup.org/what-is-a-device-identifier-composition-engine-dice/, 2021, [Online; accessed 18-February-2024].
[76] V. Costan and S. Devadas, “Intel SGX explained,” Cryptology ePrint Archive, Report 2016/086, 2016, https://eprint.iacr.org/2016/086.
[77] ARM Security Technology - Building a Secure System using Trust-
Zone Technology, ARM Limited, 2009.
[78] A. Ltd, “Trustzone technology for armv8-m architecture version
2.1,” https://developer.arm.com/documentation/100690/0201/, 2019.
[79] D. Lee, D. Kohlbrenner, S. Shinde, K. Asanović, and D. Song,
“Keystone: An open framework for architecting trusted execution
environments,” in Proceedings of the Fifteenth European Conference
on Computer Systems, 2020, pp. 1–16.
[80] M. Armanuzzaman and Z. Zhao, “Byotee: Towards building your
own trusted execution environments using fpga,” arXiv preprint
arXiv:2203.04214, 2022.
[81] H. Hu et al., “Enforcing unique code target property for control-flow
integrity,” in CCS, 2018.
[82] B. Niu et al., “Per-input control-flow integrity,” in CCS, 2015.
[83] K. Lu et al., “Where does it go? refining indirect-call targets with
multi-layer type analysis,” in CCS, 2019.
[84] A. J. Neto et al., “Isc-flat: On the conflict between control flow
attestation and real-time operations,” in RTAS. IEEE, 2023.
[85] S. Zeitouni et al., “Atrium: Runtime attestation resilient under
memory attacks,” in ICCAD. IEEE, 2017.
[86] M. Zhang et al., “Control flow integrity for cots binaries,” in
USENIX Security, 2013.
[87] C. Zhang et al., “Practical control flow integrity and randomization
for binary executables,” in S&P. IEEE, 2013.
[88] J. Criswell et al., “Kcofi: Complete control-flow integrity for com-
modity operating system kernels,” in S&P. IEEE, 2014.
[89] B. Niu et al., “Modular control-flow integrity,” in PLDI, 2014.
[90] A. J. Mashtizadeh et al., “Ccfi: Cryptographically enforced control
flow integrity,” in CCS, 2015.
[91] M. Zhang et al., “Control flow and code integrity for cots binaries:
An effective defense against real-world rop attacks,” in ACSAC,
2015.
[92] V. Mohan et al., “Opaque control-flow integrity.” in NDSS, 2015.
[93] M. Payer et al., “Fine-grained control-flow integrity through binary
hardening,” in DIMVA. Springer, 2015.
[94] V. Van Der Veen et al., “A tough call: Mitigating advanced code-
reuse attacks at the binary level,” in S&P. IEEE, 2016.
[95] X. Ge et al., “Fine-grained control-flow integrity for kernel soft-
ware,” in EuroS&P. IEEE, 2016.
[96] ARM, “BTI,” https://developer.arm.com/documentation/ddi0602/2021-12/Base-Instructions/BTI--Branch-Target-Identification-, 2020, [Online; accessed 13-February-2023].
[97] ——, “Return Address Signing using ARM Pointer Authentication,” https://gcc.gnu.org/legacy-ml/gcc-patches/2018-11/msg00104.html, 2018, [Online; accessed 13-February-2023].
[98] M. Ismail et al., “Tightly seal your sensitive pointers with
{PACTight},” in USENIX Security, 2022.
[99] N. Carlini et al., “Rop is still dangerous: Breaking modern defenses,”
in USENIX Security, 2014.
[100] L. Davi et al., “Stitching the gadgets: On the ineffectiveness of
coarse-grained control-flow integrity protection,” in USENIX Secu-
rity, 2014.
[101] E. Göktaş et al., “Out of control: Overcoming control-flow integrity,”
in S&P. IEEE, 2014.
[102] E. Göktaş et al., “Size Does Matter: Why Using Gadget-Chain
Length to Prevent Code-Reuse Attacks is Hard,” in USENIX Se-
curity, 2014.
[103] F. Schuster et al., “Evaluating the effectiveness of current anti-rop
defenses,” in RAID. Springer, 2014.
[104] V. Van der Veen et al., “Practical context-sensitive cfi,” in CCS,
2015.
[105] R. Ding et al., “Efficient Protection of Path-Sensitive Control Secu-
rity,” in USENIX Security, 2017.
[106] M. Werner et al., “Sponge-based control-flow protection for iot
devices,” in EuroS&P. IEEE, 2018.
[107] M. R. Khandaker et al., “Origin-sensitive control flow integrity,” in
USENIX Security, 2019.
[108] M. Khandaker et al., “Adaptive call-site sensitive control flow in-
tegrity,” in EuroS&P. IEEE, 2019.
[109] N. S. Almakhdhub et al., “µRAI: Securing Embedded Systems with
Return Address Integrity,” in NDSS, 2020.
[110] M. Ismail et al., “Vip: safeguard value invariant property for thwart-
ing critical memory corruption attacks,” in CCS, 2021.
[111] Intel, “Intel Processor Trace,” https://edc.intel.com/content/www/us/en/design/ipla/software-development-platforms/client/platforms/alder-lake-desktop/12th-generation-intel-core-processors-datasheet-volume-1-of-2/010/intel-processor-trace/, 2015, [Online; accessed 13-February-2024].
[112] X. Tan et al., “Sherloc: Secure and holistic control-flow violation
detection on embedded systems,” in CCS, 2023.
[113] I. D. O. Nunes et al., “{APEX}: A verified architecture for proofs
of execution on remote devices under full software compromise,” in
USENIX Security, 2020.
[114] D. Papamartzivanos et al., “Towards efficient control-flow attestation
with software-assisted multi-level execution tracing,” in MeditCom.
IEEE, 2021.
[115] G. Ramalingam, “The undecidability of aliasing,” TOPLAS, 1994.
[116] R. Baldoni et al., “A survey of symbolic execution techniques,”
CSUR, 2018.
[117] F. Toffalini et al., “{ScaRR}: Scalable runtime remote attestation for
complex systems,” in 22nd International Symposium on Research in
Attacks, Intrusions and Defenses (RAID 2019), 2019.
[118] Microsoft, “Control Flow Guard for platform security,”
https://learn.microsoft.com/en-us/windows/win32/secbp/control-
flow-guard, 2022, [Online; accessed 13-February-2023].
[119] D. Kuzhiyelil et al., “Towards transparent control-flow integrity in
safety-critical systems,” in ISC. Springer, 2020.
[120] M. Geden et al., “Hardware-assisted remote runtime attestation for
critical embedded systems,” in PST. IEEE, 2019.
[121] Y. Zhang et al., “Recfa: resilient control-flow attestation,” in ACSAC,
2021.
[122] M. Morbitzer et al., “Guarantee: Introducing control-flow at-
testation for trusted execution environments,” arXiv preprint
arXiv:2202.07380, 2022.
[123] N. Christoulakis et al., “Hcfi: Hardware-enforced control-flow in-
tegrity,” in ACM CODASPY, 2016.
[124] X. Ge et al., “Griffin: Guarding control flows using intel processor
trace,” ACM SIGPLAN Notices, 2017.
[125] M. Bauer et al., “Typro: Forward cfi for c-style indirect function
calls using type propagation,” in ACSAC, 2022.
[126] L. Davi et al., “Hafix: Hardware-assisted flow integrity extension,”
in DAC, 2015.
[127] J. Zhou et al., “Silhouette: Efficient protected shadow stacks for
embedded systems,” in USENIX Security, 2020.
[128] LLVM Community, “Safe Stack,” https://clang.llvm.org/docs/
SafeStack.html, 2014, [Online; accessed 18-February-2023].
[129] ——, “Shadow Call Stack,” https://clang.llvm.org/docs/
ShadowCallStack.html, 2012, [Online; accessed 18-February-
2023].
[130] Y. Cheng et al., “Ropecker: A generic and practical approach for
defending against rop attack,” in NDSS, 2014.
[131] V. Pappas, “kbouncer: Efficient and transparent rop mitigation,” Apr,
2012.
[132] G. F. Roglia et al., “Surgically returning to randomized lib (c),” in
ACSAC. IEEE, 2009.
[133] T. Nyman et al., “Cfi care: Hardware-supported call and return
enforcement for commercial microcontrollers,” in RAID. Springer,
2017.
[134] Y. Gu et al., “Pt-cfi: Transparent backward-edge control flow vi-
olation detection using intel processor trace,” in ACM CODASPY,
2017.
[135] V. Pappas et al., “Transparent ROP: exploit mitigation using indirect
branch tracing,” in USENIX Security, 2013.
[136] N. Yadav et al., “Whole-program control-flow path attestation,” in
CCS, 2023.
[137] B. Niu et al., “Rockjit: Securing just-in-time compilation using
modular control-flow integrity,” in CCS, 2014.
[138] Intel, “Intel 64 and ia-32 architectures software developer’s manual,”
2018.
[139] E. Aliaj et al., “Garota: generalized active root-of-trust architecture,”
arXiv preprint arXiv:2102.07014, 2021.
[140] T. Frassetto et al., “Cfinsight: A comprehensive metric for cfi policies,” in NDSS, 2022.
[141] F. Schuster et al., “Counterfeit object-oriented programming: On the
difficulty of preventing code reuse attacks in c++ applications,” in
S&P. IEEE, 2015.
[142] R. Gawlik et al., “Towards automated integrity protection of c++
virtual function tables in binary programs,” in ACSAC, 2014.
[143] C. Zhang et al., “Vtint: Defending virtual function tables’ integrity,”
in NDSS, 2015.
[144] M. Theodorides et al., “Breaking active-set backward-edge cfi,” in
HOST. IEEE, 2017.