Towards a General-Purpose Dynamic Information
Flow Policy
Peixuan Li, Danfeng Zhang
Department of Computer Science and Engineering
Pennsylvania State University, University Park, PA United States
e-mail: {pzl129,zhang}@cse.psu.edu
Abstract—Noninterference offers a rigorous end-to-end guar-
antee for secure propagation of information. However, real-world
systems almost always involve security requirements that change
during program execution, making noninterference inapplicable.
Prior works alleviate the limitation to some extent, but even for a
veteran in information flow security, understanding the subtleties
in the syntax and semantics of each policy is challenging, largely
due to very different policy specification languages and, more
fundamentally, different semantic requirements of each policy.
We take a top-down approach and present a novel information
flow policy, called Dynamic Release, which allows information
flow restrictions to downgrade and upgrade in arbitrary ways.
Dynamic Release is formalized on a novel framework that,
for the first time, allows us to compare and contrast various
dynamic policies in the literature. We show that Dynamic Release
generalizes declassification, erasure, delegation and revocation.
Moreover, it is the only dynamic policy that is both applicable
and correct on a benchmark of tests with dynamic policy.
I. INTRODUCTION
While noninterference [28] has become a cliché for end-
to-end data confidentiality and integrity in information flow
security, this well-accepted concept only describes the ideal
security expectations in a static setting, i.e., when data sensitiv-
ity does not change throughout program execution. However,
real-world applications almost always involve some dynamic
security requirements, which motivates the development of
various kinds of dynamic information flow policies:
• A declassification policy [7], [10], [22], [46], [26], [27],
[37], [45] weakens noninterference by deliberately re-
leasing (i.e., declassifying) sensitive information. For in-
stance, a conference management system typically allows
deliberate release of paper reviews and acceptance/rejec-
tion decisions after the notification time.
• An erasure policy [18], [19], [33], [23], [30], [5] strength-
ens noninterference by requiring some public information
to become more sensitive, or be erased completely, when
a certain condition holds. For example, a payment system
should not retain any record of credit card details once
the transaction is complete.
• A delegation/revocation policy [3], [32], [50], [38]
dynamically updates the sensitivity roles in a security
system to accommodate mutable security requirements,
such as delegating/revoking the access rights of a
new/leaving employee.
Moreover, there are a few case studies on the security
properties needed in the light of one specific context or task [6],
[31], [43], [49], and systems have been built that provably
enforce some variants of declassification policy (e.g., CoCon [34],
CosMeDis [12]) and erasure policy (e.g., Civitas [21]).
Although the advances make it possible to specify and verify
some variants of dynamic policy, cherry-picking the appropri-
ate policy is still a daunting task: different policies (even when
they belong to the same kind) have very different syntax for
specifying how a policy changes [47], security conditions of
very different natures (i.e., noninterference, bisimulation
and epistemic [16]), and even completely inconsistent notions
of security (i.e., policies might disagree on whether a program
is secure or not [16]). So even for veteran researchers in
information flow security, understanding the subtleties in the
syntax and semantics of each policy is difficult, evidenced by
highly-cited papers that synthesize existing knowledge on de-
classification policy [47] and dynamic policy [16]. Arguably, it
is currently impossible for a system developer/user to navigate
in the jungle of unconnected policies (even for the ones in the
same category) when a dynamic policy is needed [16], [47].
In this paper, we take a top-down approach and propose
Dynamic Release, the first information flow policy that enables
declassification, erasure, delegation and revocation at the same
time. One important insight that we developed during the
process is that erasure and revocation both strengthen an
information flow policy, despite their very different syntax
in existing work. However, an erasure policy by definition
disallows information leaked in the past (i.e., before
erasure) from being released again in the future, whereas most
revocation policies allow this. This motivates the introduction of two kinds
of policies, which we call persistent and transient policies.
The distinction can be interpreted as a facet [16]: a type of
information flow which is permitted by some definitions but
not by others.
Moreover, Dynamic Release is built on a novel formaliza-
tion framework that is shown to subsume existing security con-
ditions that are formalized in different ways (e.g., noninterfer-
ence, bisimulation and epistemic [16]). More importantly, for
the first time, the formalization framework allows us to make
apples-to-apples comparisons among existing policies, which
were previously incompatible (i.e., one cannot trivially convert
one to another). Besides the distinction between persistent and
transient policies mentioned earlier, we also notice that it is
more challenging to define a transient policy (e.g., erasure), as
it requires a definition of the precise knowledge gained from
arXiv:2109.08096v1 [cs.CR] 16 Sep 2021
A. Declassification

(i) Secure Program:
  1  // bid : S
  2  submit := bid;
  3  output(submit, S);
  4  // bid : P
  5  output(submit, P);

(ii) Insecure Program:
  1  // bid : S
  2  submit := bid;
  3  output(submit, P);
  4  // bid : P
  5  output(submit, P);

B. Erasure

(i) Secure Program:
  1  // credit_card : M
  2  copy := credit_card;
  3  output(copy, M);
  4  // credit_card : ⊤
  5  copy := 0;
  6  output(copy, M);

(ii) Insecure Program:
  1  // credit_card : M
  2  copy := credit_card;
  3  output(copy, M);
  4  // credit_card : ⊤
  5  // No Clear Up
  6  output(copy, M);

C. Delegate/Revoke

(i) Secure Program:
  1  // book : bk, notes : Alice
  2  // bk → Alice
  3  notes := half(book);
  4  output(notes, Alice);
  5  // bk ↛ Alice
  6  output(notes, Alice);

(ii) Insecure Program:
  1  // book : bk, notes : Alice
  2  // bk → Alice
  3  notes := half(book);
  4  output(notes, Alice);
  5  // bk ↛ Alice
  6  output(book, Alice);

Fig. 1. Examples of Dynamic Policies.
observing one output event, rather than the more standard cu-
mulative knowledge that we see in existing persistent policies.
Finally, we built a new AnnTrace benchmark for testing
and understanding variants of dynamic policies in general.
The benchmark consists of examples with dynamic policies
from existing papers, as well as new subtle examples that we
created in the process of understanding dynamic policies. We
implemented our policy and existing policies, and found that
Dynamic Release is the only one that is both applicable and
correct on all examples.
To summarize, this paper makes the following contributions:
1) We present a language abstraction with concise yet ex-
pressive security specification (Section III) that allows us
to specify various existing dynamic policies, including
declassification, erasure, delegation and revocation.
2) We present a new policy Dynamic Release (Section IV).
The new definition resolves a few subtle pitfalls that we
found in existing definitions, and its security condition
handles transient and persistent policies in a uniform way.
3) We generalize the novel formalization framework behind
Dynamic Release and show that it, for the first time, al-
lows us to compare and contrast various dynamic policies
at the semantic level (Section V). The comparison leads
to new insights that were not obvious in the past, such
as whether an existing policy is transient or persistent.
4) We build a new benchmark for testing and understand-
ing dynamic policies, and implement our policy and
existing ones (Section VI). Evaluation on the benchmark
suggests that Dynamic Release is the only one that is
both applicable and correct on all examples.
II. BACKGROUND AND OVERVIEW
A. Security Levels
As standard in information flow security, we assume the
existence of a set of security levels 𝕃, describing the intended
confidentiality of information¹. For generality, we do not assume
that all levels form a Denning-style lattice. For instance,
delegation and revocation typically use principals/roles (such
as Alice, Bob) where the acts-for relation on principals can
change at run time. For simplicity, we use the notation ℓ ∈ L
if all levels form a lattice L, rather than Lℓ ∈ 𝕃. Moreover,
we use P (public) and S (secret) to represent levels in a standard
two-point lattice where P < S but S ≮ P.

¹Since integrity is the dual of confidentiality, we will assume confidentiality
hereafter.
B. Terminology
Some terms in dynamic policy are overloaded and used
inconsistently in the literature. For instance, declassification
is sometimes confused with dynamic policy [16]. To avoid
confusion, we first define the basic terminology that we use
throughout the paper.
Definition 1 (Dynamic (Information Flow) Policy): An in-
formation flow policy is dynamic if it allows the sensitivity of
information to change during one execution of a program.
As standard, we say that a change of sensitivity is down-
grading (resp. upgrading) if it makes information less sensitive
(resp. more sensitive).
Next, we use the examples in Figure 1 to introduce the major
kinds of dynamic policies in the literature. For readability,
we use informal security specification in comments for most
examples in the paper; a formal specification language is given
in Section III.
a) Declassification: Given a Denning-style lattice L,
declassification occurs when a piece of information has its
sensitivity level ℓ1 downgraded to a lower sensitivity level ℓ2
(i.e., ℓ2 < ℓ1). Consider Figure 1-A, which models an online
bidding system. When bidders submit their bids to the system
during the bidding phase, each bid is classified so that no other
bidders are allowed to learn it. When the bidding
ends, the bids become public to all bidders. In the secure program
(i), the bid is only revealed to a public channel with level P
(Line 5) after bidding ends. However, the insecure program
(ii) leaks the bid during the bidding phase (Line 3).
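The per-output check underlying this example can be sketched executably. The snippet below is only an illustrative monitor of ours, not the paper's formal security condition: it models levels as level sets (formalized in Section III) and allows an output only if the data's current level flows to the channel's level.

```python
# Illustrative sketch: a naive per-output flow check for Fig. 1-A.
# Levels are modeled as level sets: a larger set is less restrictive,
# and L1 flows to L2 iff L2 is a subset of L1.
LP = {"P", "S"}   # public: may flow to both P and S observers
LS = {"S"}        # secret: may flow to S observers only

def output_allowed(data_level, channel_level):
    """Data at data_level may be sent on channel_level iff
    data_level flows to channel_level, i.e., channel_level <= data_level."""
    return channel_level <= data_level

# Secure program (i): bid has level S at Line 3 and P at Line 5.
print(output_allowed(LS, LS))  # Line 3, channel S: True
print(output_allowed(LP, LP))  # Line 5, bid now declassified to P: True
# Insecure program (ii): Line 3 outputs bid on a P channel while bid is S.
print(output_allowed(LS, LP))  # False: the leak is flagged
```

Note that such a per-output check is deliberately naive; it ignores implicit flows and the knowledge-based reasoning developed in Section IV.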
b) Erasure: Given a Denning-style lattice L, information
erasure occurs when a piece of information has its sensitivity
level ℓ1 upgraded to a more restrictive sensitivity level, or
an incomparable level ℓ2 (i.e., ℓ2 ⋢ ℓ1). Moreover, when
information is erased to level ⊤, the sensitive information
must be removed from the system as if it was never input
into the system. Figure 1-B is from a payment system. The
user of the system gives her credit card information to the
merchant (at level M) as payment for her purchase. When
the transaction is done, the merchant is not allowed to
retain/use the credit card information for any other purpose
(i.e., its level changes to ⊤). The secure program (i) only uses
the credit card information during the transaction (Line 3), and
any related information is erased after the transaction (Line 5).
The insecure program (ii), however, fails to protect the credit
card information after the transaction (Line 6).
c) Delegation and Revocation: Delegation and revoca-
tion are typically used together, in a principal/role-based
system [1], [25], [41]. In this model, information is associated
with principals/roles, and a dynamic policy is specified as
changes (i.e., add or remove) to the “acts-for” relationship on
principals/roles. Figure 1-C is from a book renting system,
where its customers are allowed to read books during the
renting period. In this example, Alice acts-for bk (bk →
Alice) before line 3. Hence, she is allowed to take notes
from the book. When the renting is over, the book is no
longer accessible to Alice (bk ↛ Alice), but the notes
remain accessible to her. The secure program (i) allows
the customer to access her notes (Line 6) taken during the
renting period. The insecure program (ii) fails to protect the
book (Line 6) after the renting is over.
C. Overview
We use Figure 1 to highlight two major obstacles to
understanding and applying various kinds of dynamic policies.
First, we note that a delegation/revocation policy (Example
C) and an erasure policy (Example B) use different formats
to model sensitivity change. A delegation/revocation policy
attaches fixed security levels to data throughout program
execution; policy change is modeled as changing the acts-
for relation on roles. On the other hand, an erasure policy
uses a fixed lattice throughout program execution; policy
change is modeled as mutable security levels on data. These
two examples are similar from the policy-change perspective, as
they are both upgrading policies. But due to their different
specification formats, their relationship becomes obscure.
Second, we note that Example B.ii and C.i are semantically
very similar: both examples first read data when the policy
allows so, and then try to access the data again when the
policy on data forbids so. However, B.ii is considered insecure
according to an erasure policy, while C.i is considered secure
according to a revocation policy. Even when we only consider
policies of the same kind (e.g., delegation/revocation), such
inconsistency in the security notion also exists; this is known as
the facets of dynamic policies [16].
Broberg et al. [16] have identified a few facets, but identi-
fying other differences among existing policies is extremely
difficult, as the policies are formalized in very different styles
(e.g., noninterference, bisimulation and epistemic). We can peek at
the semantics-level differences through a few examples, but
an apples-to-apples comparison is still impossible at this point.
In this paper, we take a top-down approach that rethinks
dynamic policy from scratch. Instead of developing four kinds
of policies seen in prior work, we observe that there are only
two essential building blocks of a dynamic policy: upgrading
and downgrading. With an expressive specification language
syntax (Section III), we show that in terms of upgrading and
downgrading sensitivity, declassification (resp. erasure) is the
same as delegation (resp. revocation). In terms of the formal
security condition of dynamic policy, we adopt the epistemic
model [7] and develop a formalization framework that can be
informally understood as the following security statement:
A program c is secure iff for any event t produced
by c, the “knowledge” gained about secrets by
learning t is bounded by what’s allowed by the
policy at t.
We note that a key challenge of a proper security definition
for the statement above is to properly define the “knowledge”
of learning a single event t. During the process of developing
the formal definition, we discovered a new facet of upgrading
policies; the difference is that whether an upgrading policy
automatically allows information leakage (after upgrading)
when it has happened in the past. Consequently, we pre-
cisely define the “knowledge” of learning a single event and
make semantics-level choices (called transient and persistent
respectively) of the new facet explicit in Dynamic Release
(Section IV).
To compare and contrast various dynamic policies (includ-
ing Dynamic Release), we cast existing policies into the for-
malization framework behind Dynamic Release (Section V).
We find that the semantics of erasure and revocation differ
drastically: an erasure policy is transient by definition,
while most revocation policies are persistent. The semantics-
level difference sheds light on why Example B.ii and C.i have
inconsistent security under erasure and revocation policies,
even though they are similar programs.
III. DYNAMIC POLICY SPECIFICATION
We first present the syntax of an imperative language with
its security specification. Based on that, we show that the
policy specification is powerful enough to describe declassifi-
cation, erasure, delegation and revocation policies. Finally, we
define a few notations to be used throughout the paper.
A. Language Syntax and Security Specification
In this paper, we use a simple imperative language with
expressive security specification, as shown in Figure 2. The
language provides standard features such as variables, as-
signments, sequential composition, branches and loops. Other
features are introduced for security:
3
Variables (Vars)       x, y, z
Events (S)             s
Expressions (E)        e ::= x | n | e op e
Commands               c ::= skip | c1; c2 | x := e | while (e) c
                           | if (e) then c1 else c2 | output(b, e)
                           | EventOn(s) | EventOff(s)
Level Sets (𝕃)         L
Security Labels (B)    b ::= L | cnd? b1 ◦ b2
Conditions             cnd ::= s | e | cnd ∧ cnd
                           | cnd ∨ cnd | ¬cnd
Mutation Directions    ◦ ::= → | ← | ⇄
Policy Specification   Γ : Vars ↦ B
Policy Type            ::= Tran | Per
Fig. 2. Language Syntax with Security Specification.
• We explicitly model information release by a release
command output(b, e); it reveals the value of expression
e to an information channel with security label b.²
• We introduce distinguished security events S. An event
s ∈ S is similar to a Boolean; we distinguish s and
x in the language syntax to ensure that security events
can only be set and unset using the distinguished commands
EventOn(s) and EventOff(s), which set s to true and
false respectively. We assume that all security events
are initialized to false.
1) Sensitivity Levels: For generality, we assume a prede-
fined set 𝕃 of all security levels, and use a level set L ⊆ 𝕃 to
specify data sensitivity. Intuitively, a level set L consists of
the levels that the associated information can flow to.
Hence, L1 is less restrictive than L2, written L1 < L2, iff
L2 ⊂ L1, and L1 ⊑ L2 iff L2 ⊆ L1.
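As a sanity check, this reversed-inclusion ordering on level sets can be sketched in a few lines of executable code (a sketch for illustration, not part of the paper's formalism):

```python
# Level sets: the set of levels the information may flow to.
# A larger set is therefore *less* restrictive (reversed subset order).
def strictly_less_restrictive(L1, L2):
    """L1 < L2 iff L2 is a proper subset of L1."""
    return L2 < L1

def at_most_as_restrictive(L1, L2):
    """L1 flows to L2 (L1 is at most as restrictive) iff L2 is a subset of L1."""
    return L2 <= L1

LP, LS = {"P", "S"}, {"S"}
print(strictly_less_restrictive(LP, LS))  # True: public is less restrictive
print(at_most_as_restrictive(LS, LS))     # True: the order is reflexive
print(at_most_as_restrictive(LS, LP))     # False: secret does not flow to public
```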
Although the use of level sets is somewhat non-standard, we
note that it provides better generality compared with existing
specifications, such as a level from a Denning-style lattice [24]
or a role in a role-based model [1], [25], [41].
• Denning-style lattice: let L be a security lattice. We can
define 𝕃 and the level set Lℓ that represents ℓ ∈ L as follows:

    𝕃 = {ℓ | ℓ ∈ L};   Lℓ ≜ {ℓ′ ∈ L | ℓ ⊑ ℓ′}   (1)

Consider a two-point lattice {P, S} with P < S. It can be
written as follows in our syntax:

    𝕃 ≜ {P, S};   LS ≜ {S};   LP ≜ {P, S};
• Role-based model: let P be a set of principals/roles and
actsfor be an acts-for relation on roles. We can define
𝕃 and the level set LP that represents P ∈ P as follows:

    𝕃 = P;   LP ≜ {P′ ∈ P | P′ actsfor P}   (2)
²In the literature, it is also common to model information release as updates
to a memory portion visible to an attacker. This can be modeled explicitly
by requiring an assignment x := e, where x has label b, to emit a release
command output(b, v).
Consider a model with two roles Alice and Bob, where
Alice acts for Bob but not the other way around. It
can be written as follows in our syntax:

    𝕃 ≜ {Alice, Bob};   LAlice ≜ {Alice};   LBob ≜ {Alice, Bob};
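Both encodings can be prototyped directly. The sketch below is for illustration only; it assumes the lattice order is given as a finite predicate and that the acts-for relation is supplied as a reflexive set of pairs.

```python
# Sketch of Equations (1) and (2): building level sets.
def level_set_from_lattice(leq, levels, l):
    """Equation (1): L_l is the set of levels l' with l below-or-equal l',
    for a finite Denning-style lattice given as a `leq` predicate."""
    return {lp for lp in levels if leq(l, lp)}

def level_set_from_roles(roles, actsfor, p):
    """Equation (2): L_P is the set of roles P' with P' actsfor P;
    `actsfor` is a set of pairs, assumed reflexive for illustration."""
    return {q for q in roles if (q, p) in actsfor}

# Two-point lattice {P, S} with P < S:
leq = lambda a, b: (a, b) in {("P", "P"), ("S", "S"), ("P", "S")}
LP = level_set_from_lattice(leq, {"P", "S"}, "P")   # L_P: both P and S
LS = level_set_from_lattice(leq, {"P", "S"}, "S")   # L_S: only S

# Roles {Alice, Bob} with Alice actsfor Bob:
af = {("Alice", "Alice"), ("Bob", "Bob"), ("Alice", "Bob")}
LBob = level_set_from_roles({"Alice", "Bob"}, af, "Bob")      # {Alice, Bob}
LAlice = level_set_from_roles({"Alice", "Bob"}, af, "Alice")  # {Alice}
```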
2) Sensitivity Mutation: The core of specifying a dynamic
policy is to define how data sensitivity changes at run time.
This is specified by a security label b.
A label can simply be a level set L, which represents im-
mutable sensitivity throughout program execution. In general,
a label has the form cnd? b1 ◦ b2, where:
• A trigger condition cnd specifies when the sensitivity
changes. There are two basic kinds of trigger conditions:
a security event s and a (Boolean) program expression e.
A more complicated condition can be constructed with
logical operations on s and e. We assume that a type
system checks that whenever cnd is an expression e, e is
of Boolean type.
• The mutation direction ◦ specifies how the information
flow restriction changes. There are two one-time mutation
directions: cnd? b1 → b2 (resp. cnd? b1 ← b2) allows a
one-time sensitivity change from b1 to b2 (resp. from b2 to b1)
the first time that cnd evaluates to false (resp. true).
On the other hand, a two-way mutation cnd? b1 ⇄ b2
allows an arbitrary number of changes between b1 and b2
whenever the value of cnd flips.
3) Policy Specification: The information flow policy on a
program is specified as a function Γ from variables Vars to
security labels B, together with a policy type. The policy type
can be either transient or persistent (formalized in Section IV).
B. Expressiveness
Despite the simplicity of our language syntax and security
specification, we first show that all kinds of dynamic policies
in Figure 1 can be concisely expressed. Then, we discuss how
the specification covers the well-known what, who, where and
when dimensions [47], [48] of dynamic policies.³ Finally, we
show that the specification language is powerful enough to
encode Flow Locks [13] and its successor Paralocks [15],
a well-known meta policy language for building expressive
information flow policies.
1) Examples: We first encode the examples in Figure 1.
a) Declassification and Erasure: Both policies specify
sensitivity changes as mutating the security level of information
from some level ℓ1 to ℓ2, where both ℓ1 and ℓ2 are drawn from
a Denning-style lattice L. Such a change can be specified as
Lℓ1 → Lℓ2, where Lℓ1 and Lℓ2 are the level sets representing
ℓ1 and ℓ2, as defined in Equation (1).
For example, the informal policy on credit_card in Fig-
ure 1-B can be precisely specified as erase? {} ← {M} [Tran]
³The original definitions focus on declassification policies, but the dimensions
are applicable to dynamic policies as well.
(we will discuss why erasure is a transient policy in Sec-
tion IV), with the security command EventOn(erase) being
inserted at Line 4 to trigger the mutation.
b) Delegation and revocation: Both policies specify
sensitivity changes as modifying the acts-for relationship on
principals, such as Alice and Bob. Such a change can be
specified via the old and new sets of roles that act for the
owner, say P, of the information. That is, a change from
actsfor1 to actsfor2 can be specified as L1 → L2, where
Li ≜ {P′ ∈ P | P′ actsfor_i P}.
For example, the policy on book in Figure 1-C can be
specified as revoke? {} ← {Alice} [Per] (we will discuss
why revocation is a persistent policy in Section IV), with a
security command EventOn(revoke) being inserted at Line 5
to trigger the mutation.⁴
2) Dimensions of dynamic policy [47], [48]:
a) What: The what dimension regulates what informa-
tion’s sensitivity is changed. Since the policy specification is
defined at the variable level, our language does not fully support
partial release, which releases only a part of a secret (e.g., the
parity of a secret) to a public domain. However, we note that
the language still has some support for partial release. Consider
the example in Figure 1-C.i. The policy allows the partial value
half(book) to be accessible by Alice after Line 5, while the
whole value of book is not. As shown in Section III-B1, the
partial release of half(book) in this example can be precisely
expressed in our language. We leave full support of partial
release as future work.
Moreover, we emphasize that the policy specification regu-
lates the sensitivity of the original value of the variable. For
example, consider Γ(h) = S, Γ(x) = s? S → P for the program:

    x := h; EventOn(s); output(P, x);

The policy on x states that its original value, rather than its
value right before the output (i.e., the value of h), is declassified
to P. Hence, the program is insecure. Therefore, the specifi-
cation language rules out laundering attacks [45], [47], which
launder secrets not intended for declassification.
b) Where: The where dimension regulates level locality
(where information may flow to) and code locality (where
physically in the code information’s sensitivity changes).
It is obvious that a label cnd? b1 ◦ b2 declares where information
may flow to after a policy change, and a security event s
with the security commands EventOn(s) and EventOff(s)
specifies the code locations where sensitivity changes.
c) When: The when dimension is a temporal dimension,
pertaining to when information’s sensitivity changes. This is
specified by the trigger condition cnd. For example, a policy
(paid? P ← S) allows the associated information (e.g., a software
key) to be released once payment has been received. This is
an instance of the “Relative” specification defined in [47].
d) Who: The who dimension specifies a principal/role,
who controls the change of sensitivity; one example is the
⁴We note that our encoding requires all changes to the acts-for relation
to be anticipated, whereas a general delegation/revocation policy might also
offer the flexibility of changing the acts-for relation dynamically.
Original Paralocks program:

    // x: {D,N} ⇒ a
    // y: {N} ⇒ a
    // z: {} ⇒ a
    open(D);
    y := x;
    close(D);
    open(N);
    z := y;

Transformed program:

    // x: ¬sN? {a} ⇄ {}
    // y: ¬sN? {a} ⇄ {}
    // z: {a}
    EventOn(sD);
    y := x; output(¬sN? {a} ⇄ {}, x);
    EventOff(sD);
    EventOn(sN);
    z := y; output({a}, z);

Fig. 3. An Example of Encoding Paralocks for A = ⟨a, {D}⟩.
Decentralized Label Model (DLM) [40], which explicitly
defines ownership in security labels. While our specification
language does not explicitly define ownership, we show next
that it is expressive enough to encode Flow Locks [13] and
Paralocks [15], which in turn are expressive enough to encode
DLM [15]. Hence, the specification language also covers the
who dimension to some extent.
3) Encoding Flow Locks [13]: Both Flow Locks [13] and
its successor Paralocks [15] introduce locks, denoted as σ, to
construct dynamic policies. Let Locks be a set of locks, and
P be a set of principals. A “flow lock” policy is specified with
the following components:
• Flow locks in the form of Σ ⇒ P, where Σ ⊆ Locks is
the lock set for principal P ∈ P.
• Distinguished commands open(σ), close(σ) that open
and close the lock σ ∈ Locks.
To simplify notation, we use Γ(x, P) = Σ to denote the
fact that {Σ ⇒ P} is part of the “flow locks” of x. Paralocks
security is formalized as an extension of Gradual Release [7].
In particular, Paralocks security is defined based on a sub-security
condition for each hypothetical attacker A = (PA, ΣA), where
PA ∈ P and ΣA ⊆ Locks:
• A variable x is considered “public” for attacker A when
Γ(x, PA) = Σx ⊆ ΣA; otherwise, it is considered
“secret” for attacker A.
• A “release event”, in the gradual release sense, is defined as
a period of program execution during which the set of opened
locks satisfies Σopen ⊆ ΣA.
Hence, for each concrete A = (PA, ΣA), we can encode
Paralocks security as follows:
• We define a security event sσ for each lock σ ∈ Locks, and the
lock command open(σ) (resp. close(σ)) is converted to
EventOn(sσ) (resp. EventOff(sσ)).
• Let Γ(x) = {PA} when Γ(x, PA) = Σx ⊆ ΣA;
otherwise, Γ(x) = {} (i.e., secret for PA).
• Following the encoding of gradual release, we define
Γ′(x) = cnd? {PA} ⇄ Γ(x), where cnd ≜ ⋀_{σ∉ΣA} ¬sσ,
i.e., all locks not in ΣA are currently closed. When cnd
holds, execution is in a release event (Σopen ⊆ ΣA) and
x is public to PA; otherwise, for an ordinary output event,
x keeps its policy Γ(x).
As a concrete example, we show the original Paralocks code
and its transformed code in Figure 3 for A = ⟨a, {D}⟩. We note
that under the encoding, the first assignment y := x is under a
release event, since only lock D is open, which is a subset of
ΣA = {D}; both the output channel and the value can be read
by a. On the other hand, the second assignment z := y is not
under a release event, as the opened lock N is not possessed
by attacker A. This is also reflected by the encoding: while the
output channel is observable to a unconditionally, the value of
y has policy {} at that point, as sN = true.
Hence, we can encode Paralocks by explicitly checking the
security of each transformed program for each A, and accepting
the program iff all transformed programs are secure.
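The outer loop of this encoding-based check can be sketched as follows. The helpers `transform` and `secure` are hypothetical placeholders (not defined in the paper's text), standing for the per-attacker program transformation above and the security condition of Section IV.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as sets: the candidate lock sets SigmaA."""
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def paralocks_secure(program, principals, locks, transform, secure):
    """Accept the program iff the transformed program is secure for
    every hypothetical attacker A = (PA, SigmaA)."""
    return all(secure(transform(program, PA, SigmaA))
               for PA in principals
               for SigmaA in powerset(locks))

# Trivial usage with stub placeholders (for illustration only):
print(paralocks_secure("prog", ["a"], {"D", "N"},
                       lambda p, PA, S: p,        # stub transform
                       lambda t: True))           # stub check: True
```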
C. Interpretation of Security Specification
Intuitively, the security specification in Figure 2 specifies,
at each program execution point, the sensitivity of the
associated information. We formalize this as an interpretation
function on labels, denoted ⟦b⟧τ, which takes a label
b and a trace τ, and returns a level set L giving the information
flow restrictions at the end of τ.
a) Execution trace: As standard, we model program
state, called memory m, as a mapping from program variables
and security events to their values. The small-step semantics of
the source language is mostly standard (hence omitted), with the
exception of the output and security event commands:

               ⟨e, m⟩ ⇓ v
    ─────────────────────────────────────  (S-OUTPUT)
    ⟨output(b, e), m⟩ —⟨b,v⟩→ ⟨skip, m⟩

    ⟨EventOn(s), m⟩ → ⟨skip, m{s ↦ true}⟩    (S-SET)

    ⟨EventOff(s), m⟩ → ⟨skip, m{s ↦ false}⟩  (S-UNSET)

The semantics records all output events, in the form of ⟨b, v⟩,
during program execution, as these are the only information
release events during program execution. Moreover, the dis-
tinguished security events s are treated as Boolean variables,
which can only be set/unset by the security event commands.
Based on the small-step semantics, executing a program c
under initial memory m produces an execution trace τ with
potentially empty output events:

    ⟨c, m⟩ —⟨b1,v1⟩→ ⟨c1, m1⟩ ⋯ —⟨bn,vn⟩→ ⟨cn, mn⟩.
We use τ[i] to denote the configuration (i.e., a pair of
program and memory) after the i-th evaluation step in τ,
and ‖τ‖ to denote the number of evaluation steps in the trace.
For example, τ[0] is always the initial state of the execution, and
τ[‖τ‖] is the ending state of a terminating trace τ. We use τ[:i]
(resp. τ[i:]) to denote the prefix (resp. postfix) subtrace of τ from
the initial state up to (resp. starting from) the i-th evaluation step.
We use τ[i:j] to denote the subtrace of τ between the i-th and j-th
(inclusive) evaluation steps. Finally, we write τ1 ≼ τ2 when
τ1 is a prefix of τ2.
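This indexing maps directly onto list slicing; the sketch below (ours, for illustration) treats a trace as a plain list of configurations.

```python
# Sketch of the trace-indexing notation over a list of configurations.
tau = ["cfg0", "cfg1", "cfg2", "cfg3"]   # tau[0] is the initial state
steps = len(tau) - 1                     # ||tau|| = 3 evaluation steps

def prefix(tau, i):      return tau[:i + 1]    # tau[:i], step i included
def postfix(tau, i):     return tau[i:]        # tau[i:]
def subtrace(tau, i, j): return tau[i:j + 1]   # tau[i:j], both inclusive
def is_prefix(t1, t2):   return t2[:len(t1)] == t1   # t1 is a prefix of t2

print(prefix(tau, 2))                   # ['cfg0', 'cfg1', 'cfg2']
print(is_prefix(prefix(tau, 2), tau))   # True
```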
b) Interpretation of labels: We formalize the label se-
mantics ⟦b⟧τ in Figure 4. ⟦b⟧τ returns a level set L that
precisely specifies where the information with policy b can
flow to at the end of trace τ. For a (static) level set L, its
interpretation is simply L, regardless of τ.
For more complicated labels, the semantics also considers
the temporal aspect of label changes. For example, a one-
⟦L⟧τ = L

⟦cnd? b1 → b2⟧τ = ⟦b1⟧τ        if first(cnd, τ, false) = −1
                  ⟦b2⟧τ[i:]     if i = first(cnd, τ, false) ≥ 0

⟦cnd? b1 ← b2⟧τ = ⟦b2⟧τ        if first(cnd, τ, true) = −1
                  ⟦b1⟧τ[i:]     if i = first(cnd, τ, true) ≥ 0

⟦cnd? b1 ⇄ b2⟧τ = ⟦b1⟧τ[i+1:]  if i = last(cnd, τ, false) ≠ ‖τ‖
                  ⟦b2⟧τ[i+1:]  if i = last(cnd, τ, true) ≠ ‖τ‖

where first(cnd, τ, bl) returns the first index of τ at which cnd
evaluates to bl, or −1 if no such index exists; last(cnd, τ, bl)
returns the last index of τ at which cnd evaluates to bl, or −1 if
no such index exists.

Fig. 4. Interpretation of Security Labels.
time mutation label cnd? b1 → b2 allows a one-time sensitivity
change from b1 to b2 the first time that cnd evaluates
to false. Hence, let i be the first index of τ at which cnd
evaluates to false. Then, ⟦cnd? b1 → b2⟧τ reduces to ⟦b1⟧τ
when no such i exists (i.e., cnd always evaluates to true
in τ), and it reduces to ⟦b2⟧τ[i:] otherwise. Note that in the
latter case, it reduces to ⟦b2⟧τ[i:] rather than ⟦b2⟧τ to properly
handle nested conditions: any nested condition in b2 can only
be evaluated after cnd becomes false. The dual with ←
is defined in a similar way. Note that cnd? b1 → b2 and
¬cnd? b2 ← b1 are semantically the same; we introduce both
for convenience.
Finally, the bi-directional label (with ⇄) is interpreted
purely based on the last configuration of τ: let i be the last
index in τ at which cnd evaluates to false. Then, i ≠ ‖τ‖
implies that cnd evaluates to true at the end of τ; hence, the
label reduces to b1. Note that b1 is evaluated under τ[i+1:] in
this case to properly handle (potentially) nested conditions in
b1: any nested condition in b1 can only be evaluated after cnd
becomes true.
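These rules can be prototyped directly. The sketch below is ours, for illustration: it models a trace as a list of memories and a condition cnd as a predicate on a memory (a simplification of evaluating cnd over full configurations).

```python
# Sketch of Fig. 4: interpreting a label over a trace of memories.
# A label is either a frozenset (a static level set L) or a tuple
# (cnd, b1, direction, b2) with direction in {"->", "<-", "<->"}.

def first(cnd, tau, bl):
    """First index of tau at which cnd evaluates to bl, or -1."""
    return next((i for i, m in enumerate(tau) if cnd(m) == bl), -1)

def last(cnd, tau, bl):
    """Last index of tau at which cnd evaluates to bl, or -1."""
    hits = [i for i, m in enumerate(tau) if cnd(m) == bl]
    return hits[-1] if hits else -1

def interp(b, tau):
    if isinstance(b, frozenset):                  # [[L]]_tau = L
        return b
    cnd, b1, d, b2 = b
    if d == "->":                                 # one-time, triggers on false
        i = first(cnd, tau, False)
        return interp(b1, tau) if i == -1 else interp(b2, tau[i:])
    if d == "<-":                                 # one-time, triggers on true
        i = first(cnd, tau, True)
        return interp(b2, tau) if i == -1 else interp(b1, tau[i:])
    i = last(cnd, tau, False)                     # two-way: last state decides
    if i != len(tau) - 1:                         # cnd holds at the end
        return interp(b1, tau[i + 1:])
    i = last(cnd, tau, True)
    return interp(b2, tau[i + 1:])

# The erasure label erase? {} <- {M} from Fig. 1-B:
label = (lambda m: m["erase"], frozenset(), "<-", frozenset({"M"}))
print(interp(label, [{"erase": False}]))                   # frozenset({'M'})
print(interp(label, [{"erase": False}, {"erase": True}]))  # frozenset()
```

Before the erasure event the label denotes {M}; once erase is set, it denotes the empty level set, i.e., no observer may learn the data.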
Moreover, we can derive a dynamic specification γi for each
execution point i, such that

    ∀x. γi(x) = ⟦Γ(x)⟧τ[:i]

Additionally, we overload γi to track the dynamic interpre-
tation of a label b at each execution point i:

    ∀b. γi(b) = ⟦b⟧τ[:i]

To simplify notation, we write ⟨c0, m0⟩ → t⃗ if the execution
⟨c0, m0⟩ terminates⁵ with an extended output sequence t⃗,
which consists of extended output events t ≜ ⟨b, v, γ⟩, where
b, v are the output events on τ, and γ is the dynamic
specification at the corresponding execution point. We use
t.b, t.v and t.γ to refer to each component of an extended
output event. We use the same index notation as for traces,
where t⃗[i] returns the i-th output event, and t⃗[:i] returns the
prefix output sequence up to (and including) the i-th output;
t⃗[:0] returns an empty sequence.

⁵In this paper, we only consider output sequences t⃗ produced by
⟨c0, m0⟩ → t⃗. Hence, only terminating executions are considered in
this paper, making our knowledge and security definitions in Section IV
termination-insensitive. Termination sensitivity is an orthogonal issue to the
scope of this paper: dynamic policy.
IV. DYNAMIC RELEASE

In this section, we define Dynamic Release, an end-to-end information flow policy that allows information flow restrictions to downgrade and upgrade in arbitrary ways.
A. Semantics Notations
a) Memory Closure: For various reasons, we need to define a set of initial memories that are indistinguishable from some memory m. Given a set of variables X, we define the memory closure of m to be the set of memories that agree with m on the value of each variable x ∈ X:

Definition 2 (Memory Closure): Given a memory m and a set of variables X, the memory closure of m on X is:

⟦m⟧X ≜ {m′ | ∀x ∈ X. m(x) = m′(x)}

For simplicity, we use the following shorthands:

⟦m⟧L,γ ≜ ⟦m⟧{x | γ(x) ⊑ L}    ⟦m⟧≠b ≜ ⟦m⟧{x | Γ(x) ≠ b}

where ⟦m⟧L,γ is the memory closure on all variables whose sensitivity level is less than or equally restrictive as a level L according to γ, and ⟦m⟧≠b is the memory closure on variables whose security policy is not b: a set of memories whose values differ only on variables with policy b.
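Definition 2 can be made concrete with a small sketch. The following Python snippet (our own finite model for illustration, with an assumed explicit value domain; not part of the paper's formalism) enumerates the memory closure ⟦m⟧X of a memory m on a variable set X:

```python
from itertools import product

def memory_closure(m, X, domain):
    """All memories over m's variables that agree with m on every x in X."""
    free = [v for v in sorted(m) if v not in X]   # variables allowed to vary
    closure = []
    for values in product(domain, repeat=len(free)):
        m2 = dict(m)
        m2.update(zip(free, values))              # overwrite the free variables
        closure.append(m2)
    return closure
```

For m = {'x': 0, 'y': 1}, X = {'x'} and domain {0, 1}, the closure contains exactly the two memories that fix x = 0 and let y range over the domain.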
b) Trace filter: For various reasons, we need a filter on output traces to focus on relevant subtraces (e.g., to filter out outputs that are not visible to an attacker). Each trace filter can be defined as a Boolean function on ⟨b, v, γ⟩. With a filter function f (that returns false for irrelevant outputs), we define the projection of outputs as follows:

Definition 3 (Projection of Trace):

⌊t⃗⌋f ≜ ⟨⟨b, v, γ⟩ ∈ t⃗ | f(b, v, γ)⟩

We define the following shorthand for a commonly used filter, the L-projection filter, where the resulting trace consists of the outputs currently observable at level L:

⌊t⃗⌋L ≜ ⌊t⃗⌋λb,v,γ. γ(b) ⊑ L
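Definition 3 and the L-projection can be sketched directly. In the snippet below (an assumed representation for illustration: an event is a tuple of a label, a value, and a dict γ mapping labels to their current levels, with base levels defaulting to themselves; the attacker level L is modeled as the set of levels it may observe), filtering a trace is a comprehension:

```python
def project(trace, f):
    """⌊trace⌋_f: keep the events (b, v, gamma) for which filter f holds."""
    return [(b, v, g) for (b, v, g) in trace if f(b, v, g)]

def project_L(trace, L):
    """L-projection: keep events whose current level gamma(b) is observable at L.

    Base levels interpret to themselves, so gamma defaults to the label itself.
    """
    return project(trace, lambda b, v, g: g.get(b, b) in L)
```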
B. Key Factors of Formalizing a Dynamic Policy
Before formalizing Dynamic Release, we first introduce knowledge-based security (i.e., epistemic security) [7], which is widely used in the context of dynamic policy. Our formalization is built on the following informal security statement, which is motivated by [3]:

A program c is secure iff for any event t produced by c, the "knowledge" gained about secrets by observing t is bounded by what is allowed by the policy at t.
We first introduce a few building blocks to formalize
“knowledge” and “allowance” (i.e., the allowed leakage).
1) Indistinguishability: A key component of information flow security is defining trace indistinguishability: whether two program execution traces are distinguishable to an attacker or not. Given an attacker at level set L, each release event ⟨b, v, γ⟩ is visible iff γ(b) ⊑ L by the attack model. Hence, as is standard, we define an indistinguishability relation on traces, written ∼L, as

∼L ≜ {(t⃗1, t⃗2) | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L}

Note that an attacker cannot rule out any execution whose prefix matches t⃗1. Hence, the prefix relation ≼ is used instead of identity.
2) Knowledge gained from observation: Following the original definition of knowledge in [7], we define the knowledge gained by an attacker at level set L via observing a trace t⃗ produced by a program c as:⁶

k1(c, t⃗, L) ≜ {m | ⟨c, m⟩ → t⃗′ ∧ t⃗ ∼L t⃗′}    (3)

Intuitively, it states that if an initial memory m produces a trace that is indistinguishable from t⃗, then the attacker cannot rule out m as a possible initial memory. Note that, by definition, the smaller the knowledge set, the more information (knowledge) is revealed to the attacker.

Recall that, by definition, ⟨c, m⟩ → t⃗′ only considers terminating program executions. Hence, the knowledge definition above is the termination-insensitive version of the knowledge defined in [7]. As a consequence, the security semantics we define in this paper is also termination-insensitive.
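In a finite model, the knowledge set k1 can be computed directly. The sketch below (our own modeling for illustration: a program is a finite dict from initial memories to their traces, ∼L is prefix-matching of L-projections, and γ defaults base levels to themselves; none of this is the paper's implementation) collects every initial memory the attacker cannot rule out:

```python
def project_L(trace, L):
    """Keep (label, value) of events whose current label is observable at L."""
    return [(b, v) for (b, v, g) in trace if g.get(b, b) in L]

def k1(runs, t_obs, L):
    """k1(c, t, L): initial memories whose trace's L-projection extends ⌊t⌋_L."""
    obs = project_L(t_obs, L)
    return {m for m, t in runs.items()
            if project_L(t, L)[:len(obs)] == obs}
```

The smaller the returned set, the more the attacker has learned; an empty observation rules out nothing.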
3) Policy Allowance: To formalize security, we also need to define, for each output event t on a trace, the allowed leakage to an attacker at a level set L. Like knowledge, policy allowance, written A(m, t⃗, b, L), is defined as a set of memories that should remain indistinguishable from the actual initial memory m at the end of output sequence t⃗.

Consider a dynamic label b ∈ B, a memory m, an output sequence t⃗ of interest, and an attacker at level L. We can define policy allowance as follows:

A(m, t⃗, b, L) ≜ ⟦m⟧≠b

Intuitively, it specifies the initial knowledge of an attacker at level set L: the attacker cannot distinguish any value difference among variables with the dynamic label b. Thus, any variable with the label b is initially indistinguishable to the attacker. Eventually, Dynamic Release checks that for each label b ∈ B, the gained knowledge is bounded by the allowance with respect to b. Hence, the security of each variable is checked.
C. Challenges of Formalizing a General Dynamic Policy
We next show that it is a challenging task to formalize
the security of a general-purpose dynamic policy that allows
downgrading and upgrading to occur in arbitrary ways.
⁶We slightly modified the original definition to exclude "initial knowledge", the attacker's knowledge before executing the program.
Challenge 1: Permitting both increasing and decreasing knowledge: Allowing both downgrading and upgrading in arbitrary ways means that our general policy must permit reasoning about both increasing knowledge (as in declassification) and decreasing knowledge (as in erasure). While Equation 3 and its variants are widely used to formalize declassification policies [7], [15], they cannot reason about decreasing knowledge. For example, it is easy to check that for any c, t⃗, t⃗′, L, we have

t⃗ ≼ t⃗′ ⇒ k1(c, t⃗, L) ⊇ k1(c, t⃗′, L)

according to Equation 3. As with its other variants, the knowledge set k1 is monotonically decreasing (hence, the knowledge it represents is increasing by definition) as more events of the same execution are revealed to an attacker [7], [3], [51].
However, we need to reason about decreasing knowledge for an erasure policy. Consider the example in Figure 1-B, where the value of credit_card is revealed by the first output at Line 3. Given any program execution ⟨c, m⟩ → t⃗, we have k1(c, t⃗[:i], M) = {m} for all i ≥ 1. However, as the sensitivity of credit_card upgrades from M to ⊤ when i = 2 (i.e., at the second output), the secure program (i) can be incorrectly rejected: k1(c, t⃗[:2], M) = {m} means that the value of credit_card is known to the attacker, which violates the erasure policy at that point.

Observation 1. Equation 3 is not suitable for an upgrading policy, since it fails to reason about decreasing knowledge.

The issue is that the knowledge gained from t⃗ is defined as the full knowledge gained from observing all outputs on t⃗. Return to the secure program in Figure 1-B.i. We note that the first and second outputs together reveal the value of credit_card, but the second event alone reveals no information, as it always outputs 0. Hence, we can precisely define the exact knowledge gained from learning each output to permit both increasing and decreasing knowledge.
Challenge 2: Indistinguishability ∼L is inadequate for a general dynamic policy: As shown earlier, indistinguishability ∼L is an important component of a knowledge definition; intuitively, by observing an execution ⟨c, m⟩ → t⃗, an attacker at level set L can rule out any initial memory m′ where m ≁L m′ (i.e., m′ ∉ k1(c, t⃗, L)). However, the naive definition of ∼L might be inadequate for declassified outputs. Consider the following secure program, where x is first downgraded to P and then upgraded to S.

1 // x : P
2 if (x > 0) output(P, 1);
3 output(P, 1)
4 // x : S
5 output(P, 2)

Note that the program is secure since the only output when x is secret reveals a constant value. Assume that the initial value of x is either 0 or 1. Hence, there are two possible executions of the program, with γ1(x) = P and γ2(x) = S:

⟨c, m1⟩ → ⟨P, 1, γ1⟩ · ⟨P, 2, γ2⟩
⟨c, m2⟩ → ⟨P, 1, γ1⟩ · ⟨P, 1, γ1⟩ · ⟨P, 2, γ2⟩

The issue is in the first execution. By observing the first output, an attacker at P cannot tell whether the execution starts from m1 or m2, as both of them first output 1. However, the attacker can rule out m2 by observing the second output with the value 2. Note that this change of knowledge (from {m1, m2} to {m1}) violates the dynamic policy governing the second output: the policy on x is S, which prohibits learning the initial value of x.
Observation 2. The inadequacy of relation ∼L is rooted in the fact that, due to downgrading, the public outputs of different executions might have different lengths. Therefore, outputs at the same index but produced by different executions might be incomparable. To resolve the issue, we observe that any information release (of x) when x is P is ineffective, in the sense that the restriction on x is not in effect. In the example above, the outputs with value 1 are all ineffective, as x is public when the outputs at lines 2 and 3 are produced. This observation motivates the secret projection filter, which identifies the effective outputs for a given secret.

Definition 4 (Secret Projection of Trace): Given a policy b and an attacker at level L, the secret projection of a trace is the subtrace where information with policy b cannot flow to L and the output channel is visible to L:

⌊t⃗⌋b,L ≜ ⌊t⃗⌋λb′,v,γ. γ(b) ⋢ L ∧ γ(b′) ⊑ L

Returning to the example above, the effective subtraces starting from m1 and m2 are both ⟨P, 2, γ2(x) = S⟩, which remain indistinguishable to an attacker at level P.
Challenge 3: Effectiveness is also inadequate: With Observation 2, it might be tempting to define indistinguishability based on ⌊t⃗⌋b,L rather than ⌊t⃗⌋L. However, doing so is problematic, as shown by the following program.

1 // x : S
2 if (x > 0) output(P, 1);
3 // x : P
4 if (x <= 0) output(P, 1);

With two initial memories m1(x) = 0, m2(x) = 1, we have

⟨c, m1⟩ → ⟨P, 1, γ1(x) = P⟩
⟨c, m2⟩ → ⟨P, 1, γ2(x) = S⟩

Note that no information about the value of x is revealed on the public channel. Hence, the program is secure, as it always outputs 1. However, the effective subtrace starting from m1 is ∅ while that starting from m2 is ⟨P, 1, γ2(x) = S⟩, suggesting that the program is insecure: the value of x is revealed by the first output from m2, while the policy at that point (S) disallows it.

Observation 3. We note that both indistinguishability and effectiveness are important building blocks of a general-purpose dynamic policy. However, the challenge is how to combine them in a meaningful way. We build our security definition on both concepts and justify why the new definition is meaningful in Section IV-D.
Challenge 4: Transient vs. Persistent Policy: So far, the policy allowance A(m, t⃗, b, L) ignores what information has been leaked in the past. However, in the persistent case, such as Figure 1-C, the learned information (note) remains accessible even after the policy on book upgrades. In general, we define transient and persistent policies as:

Definition 5 (Transient and Persistent Policy): A dynamic security policy is persistent if it always allows revealing information that has been revealed in the past. Otherwise, the policy is transient.

Observation 4. Both transient and persistent policies have real-world application scenarios. Hence, a general-purpose dynamic policy should support both kinds of policies in a unified way.
D. Dynamic Release

We have introduced all the ingredients needed to formalize Dynamic Release, a novel end-to-end, general-purpose dynamic policy.

To tackle the challenges above, we first formalize the attacker's knowledge gained by observing the last event t′ on a trace t⃗ · t′. Note that simply computing the knowledge difference between observing t⃗ · t′ and observing t⃗ does not work. Consider the example in Figure 1-B.ii. Given any program execution ⟨c, m⟩ → t⃗, we have k1(c, t⃗[:i], M) = {m} for all i ≥ 1. Hence, the difference between the knowledge gained with or without the output at Line 6 is ∅, suggesting that no knowledge is gained by observing the output at Line 6 alone, which is incorrect, as it reveals the credit card number.

Instead, we take inspiration from probabilities to formalize the attacker's knowledge gained by observing a single event on a trace. Consider a program c that produces the following sequences of numbers given the corresponding inputs:

input 1: s1 = (1 · 1 · 3)
input 2: s2 = (2 · 2 · 3)
input 3: s3 = (1 · 1 · 3)
input 4: s4 = (2 · 2 · 2)
Consider the following question: what is the probability that the program generates a sequence whose last number is identical to the last number of s1? Obviously, besides s1, we also need to consider sequences s2 and s3: albeit a different sequence, s2 is consistent with s1 in the sense that its last output is 3, and s3 is indistinguishable from (i.e., identical to) s1. More precisely, we can compute the probability as follows:

Σ_{s ∈ consist(s1)} P(s)

where the consistent set consist(s1) is the set of sequences that produce the same last number as s1, i.e., {(1 · 1 · 3), (2 · 2 · 3)}. Assuming a uniform distribution on program inputs, the probability is P(1 · 1 · 3) + P(2 · 2 · 3) = (0.25 + 0.25) + 0.25 = 0.75. Note that the indistinguishable sequences s1 and s3 are implicitly accounted for in P(1 · 1 · 3).
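This probability computation can be checked mechanically. The small Python snippet below (our own encoding of the four example sequences, assuming the uniform input distribution stated above) sums the probability mass of all sequences consistent with s1:

```python
from fractions import Fraction

# The four equally likely inputs and the sequences they produce.
sequences = {1: (1, 1, 3), 2: (2, 2, 3), 3: (1, 1, 3), 4: (2, 2, 2)}
s1 = sequences[1]

# Sum P(s) over the consistent set: sequences whose last number matches s1's.
# Identical sequences (s1 and s3) pool their probability mass automatically.
prob = sum(Fraction(1, len(sequences))
           for s in sequences.values() if s[-1] == s1[-1])
assert prob == Fraction(3, 4)  # (0.25 + 0.25) + 0.25 = 0.75
```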
To compute the knowledge associated with the last event on a trace t⃗, we first use effectiveness to identify consistent traces whose last event on the effective subtrace is the same:

Definition 6 (Consistency Relation): Two output sequences t⃗1 and t⃗2 are consistent w.r.t. a policy b and an attacker level L, written t⃗1 ≡b,L t⃗2, if

n = ‖⌊t⃗1⌋b,L‖ = ‖⌊t⃗2⌋b,L‖ ∧ ⌊t⃗1⌋b,L[n] = ⌊t⃗2⌋b,L[n]
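Definition 6 can be sketched as follows (an assumed representation for illustration, not the paper's formalism: an event is a tuple (b, v, γ) with γ a dict from labels to current levels, base levels defaulting to themselves, and L the set of levels the attacker observes):

```python
def effective(trace, b, L):
    """⌊trace⌋_{b,L}: events visible at L while policy b does not flow to L."""
    return [(ch, v) for (ch, v, g) in trace
            if g.get(b, b) not in L and g.get(ch, ch) in L]

def consistent(t1, t2, b, L):
    """t1 ≡_{b,L} t2: effective subtraces of equal length, same last event."""
    e1, e2 = effective(t1, b, L), effective(t2, b, L)
    return len(e1) == len(e2) and (not e1 or e1[-1] == e2[-1])
```

On the Challenge 2 traces, the two executions have public outputs of different lengths, yet both effective subtraces are the single event ⟨P, 2⟩, so the traces come out consistent.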
Note that despite the extra complexity due to trace projection, the consistency relation is similar to the consistent set consist(s1) in the probability computation example. Next, we define the precise knowledge gained from the last event of t⃗ based on both the consistency relation and knowledge. Note that since knowledge is a set of memories rather than a number, the summation in the probability case is replaced by a set union. Similar to the probability of observing each sequence, the knowledge k1 also implicitly accounts for all indistinguishable traces (Equation 3).

Definition 7 (Attacker's Knowledge Gained from the Last Event): For an attacker at level set L, the attacker's knowledge w.r.t. information with policy b, after observing the last event of an output sequence t⃗ of program c, is the set of all initial memories that produce an output sequence that is indistinguishable from some consistent counterpart of t⃗:

k2(c, t⃗, L, b) = ⋃_{∃m′, j. ⟨c, m′⟩ → t⃗′ ∧ t⃗′[:j] ≡b,L t⃗} k1(c, t⃗′[:j], L)
To see how Definition 7 tackles Challenges 2 and 3, we revisit the code example under each challenge.

• Challenge 2: Recall that with m1(x) = 0, m2(x) = 1, γ1(x) = P and γ2(x) = S, there are two execution traces

⟨c, m1⟩ → ⟨P, 1, γ1⟩ · ⟨P, 2, γ2⟩
⟨c, m2⟩ → ⟨P, 1, γ1⟩ · ⟨P, 1, γ1⟩ · ⟨P, 2, γ2⟩

It is easy to check that the two output sequences are consistent according to Definition 6. Hence, in both traces, the knowledge gained from the last output is {m1, m2}, due to the big union in k2. Hence, we correctly conclude that no information is leaked by the last output in both traces.

• Challenge 3: Recall that with m1(x) = 0, m2(x) = 1, γ1(x) = P and γ2(x) = S, there are two execution traces

⟨c, m1⟩ → ⟨P, 1, γ1⟩
⟨c, m2⟩ → ⟨P, 1, γ2⟩

While the two traces are not consistent with each other, we know that k1(c, ⟨P, 1, γ2(x) = S⟩, P) = {m1, m2}, since the two traces satisfy ∼P. Hence, the knowledge gained from the last event is {m1, m2}, and we correctly conclude that no information is leaked by the last output.
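The Challenge 2 reasoning can be replayed end-to-end in a small finite model. The sketch below (our own encoding for illustration, not the paper's implementation: m1 and m2 are encoded by the initial value of x, events are (channel, value, γ) tuples, and base levels interpret to themselves) computes k2 for both full traces and obtains the full memory set each time, confirming that the last output leaks nothing:

```python
g1, g2 = {'x': 'P'}, {'x': 'S'}       # dynamic specs before/after the upgrade
runs = {                               # initial x -> trace of (channel, value, gamma)
    0: [('P', 1, g1), ('P', 2, g2)],
    1: [('P', 1, g1), ('P', 1, g1), ('P', 2, g2)],
}
L = {'P'}
lvl = lambda g, b: g.get(b, b)         # base levels interpret to themselves

def proj_L(t):                         # events observable at L
    return [(c, v) for (c, v, g) in t if lvl(g, c) in L]

def effective(t):                      # secret projection w.r.t. x's policy
    return [(c, v) for (c, v, g) in t if lvl(g, 'x') not in L and lvl(g, c) in L]

def k1(t_obs):                         # memories whose runs extend ⌊t_obs⌋_L
    obs = proj_L(t_obs)
    return {m for m, t in runs.items() if proj_L(t)[:len(obs)] == obs}

def consistent(ta, tb):                # Definition 6 on effective subtraces
    ea, eb = effective(ta), effective(tb)
    return len(ea) == len(eb) and (not ea or ea[-1] == eb[-1])

def k2(t_obs):                         # Definition 7: union of k1 over consistent prefixes
    ks = set()
    for t in runs.values():
        for j in range(len(t) + 1):
            if consistent(t[:j], t_obs):
                ks |= k1(t[:j])
    return ks
```

Both k2(runs[0]) and k2(runs[1]) evaluate to {0, 1}, i.e., {m1, m2}: the attacker gains nothing from the last output even though the public output sequences have different lengths.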
To tackle Challenge 4, we observe that a persistent policy allows information leaked in the past to be released again, while a transient policy disallows this. This is made precise by the following refinement of policy allowance:

A(m, t⃗, b, L) ≜ { ⟦m⟧≠b,                           b is transient
                  ⟦m⟧≠b ∩ k1(c, t⃗[:‖t⃗‖−1], L),    b is persistent    (4)

where k1(c, t⃗[:‖t⃗‖−1], L) is the knowledge from every output event in t⃗ except the last one. Note that since the knowledge here represents the cumulative knowledge gained from observing all events, we use the standard knowledge k1 instead of the knowledge gained from the last event, k2.
Putting everything together, we have Dynamic Release security, where for any output of the program, the attacker's knowledge gained from observing the output is always bounded by the policy allowance at that output point.

Definition 8 (Dynamic Release):

∀m, L ⊆ 𝓛, b ∈ B, t⃗. ⟨c, m⟩ → t⃗ ⟹ ∀1 ≤ i ≤ ‖t⃗‖.

k2(c, t⃗[:i], L, b) ⊇ { ⟦m⟧≠b,                          transient
                       ⟦m⟧≠b ∩ k1(c, t⃗[:i−1], L),     persistent
V. SEMANTICS FRAMEWORK FOR DYNAMIC POLICY

While various forms of formal policy semantics exist in the literature, different policies have very different kinds of security conditions (i.e., noninterference, bisimulation and epistemic [16]). In this section, we generalize the formalization of Dynamic Release (Definition 8) by abstracting away its key building blocks. We then convert various existing dynamic policies into this formalization framework, providing the first apples-to-apples comparison between those policies.
A. Formalization Framework for Dynamic Policies

We first abstract away a few building blocks of Definition 8. To define them more concretely, we consider an output sequence t⃗ produced by ⟨c, m⟩, i.e., ⟨c, m⟩ → t⃗, as the context. As already discussed in Section IV, the building blocks are:

• Output Indistinguishability, written ∼: two output sequences t⃗1 and t⃗2 satisfy t⃗1 ∼ t⃗2 when they are considered indistinguishable to the attacker.
• Policy Allowance, written A(m, t⃗, b, L): a set of initial memories that should be indistinguishable to an attacker at L at the end of sequence t⃗.
• Consistency Relation, written ≡: when precisely defining the knowledge gained from each output event, two sequences can be considered "consistent" even if they are not identical (Definition 6).
With these abstracted parameters, we first generalize the knowledge definition k1 (Equation 3) to an arbitrary relation ∼ on output sequences:

Definition 9 (Generalized Knowledge):

K(c, t⃗, ∼) ≜ {m | ⟨c, m⟩ → t⃗′ ∧ t⃗ ∼ t⃗′}    (5)
Therefore, with abstract ∼, A(m, t⃗, b, L) and ≡, we can generalize Definition 8 into the following framework:

Definition 10 (Formalization Framework): Given a trace indistinguishability relation ∼, consistency relation ≡ and policy allowance A, a command c satisfies a dynamic policy iff the knowledge gained from observing any output does not exceed its corresponding policy allowance:

∀m, L ⊆ 𝓛, b ∈ B, t⃗. ⟨c, m⟩ → t⃗ ⟹ ∀1 ≤ i ≤ ‖t⃗‖.

⋃_{∃m′, j. ⟨c, m′⟩ → t⃗′ ∧ t⃗′[:j] ≡ t⃗[:i]} K(c, t⃗′[:j], ∼) ⊇ A(m, t⃗[:i], b, L)
Let ∼DR ≜ {(t⃗1, t⃗2) | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L}, ADR be as defined in Equation (4), and ≡DR be as defined in Definition 7; it is easy to check that Definition 10 instantiates to Definition 8.

Moreover, when ≡ is instantiated with the equality relation =, a case that we have seen in all existing dynamic policies, the general framework can be simplified to the following form:

∀c, m, L ⊆ 𝓛, b ∈ B, t⃗. ⟨c, m⟩ → t⃗ ⟹ ∀1 ≤ i ≤ ‖t⃗‖.

K(c, t⃗[:i], ∼) ⊇ A(m, t⃗[:i], b, L)

We use this simpler form for any dynamic policy where consistency is simply defined as equality.
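The generalized knowledge of Definition 9 is the same computation as k1 with the relation left as a parameter. A minimal sketch (our own finite model for illustration: a dict from initial memories to traces, with the projection step omitted for brevity; not the paper's implementation):

```python
def K(runs, t_obs, indist):
    """K(c, t, ∼): initial memories whose trace is related to t_obs by ∼."""
    return {m for m, t in runs.items() if indist(t_obs, t)}

# Instantiating ∼ with plain prefix ordering recovers the shape of k1;
# other rows of Table I plug in their own relations, e.g., a subtrace
# relation in the style of Cryptographic Erasure.
def prefix_indist(t1, t2):
    return list(t2[:len(t1)]) == list(t1)

def subtrace_indist(t1, t2):
    return any(list(t2[i:i + len(t1)]) == list(t1)
               for i in range(len(t2) - len(t1) + 1))
```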
B. Existing works in the formalization framework

Next, we incorporate existing definitions into the formalization framework; the results are summarized in Table I. We first highlight a few insights from Table I. Then, for each work (except Paralock, due to space constraints), we sketch how to convert it (with its potentially different security specification language and semantic formalization) into the specification language of Figure 2 and Definition 10, respectively. The conversion of Paralock and the correctness proofs of all conversions are available in the Supplementary Material.

1) Insights from Table I: To the best of our knowledge, this is the first work that enables an apples-to-apples comparison between various dynamic policies. We highlight a few insights.
First, erasure policies (e.g., According to Policy and Cryptographic Erasure) define indistinguishability ∼ in a substantially more complicated way than the others. This complexity suggests that formalizing an erasure policy is more involved than formalizing other dynamic policies.

Second, besides Dynamic Release, Gradual Release, Paralock and Forgetful Attacker also have K(c, t⃗[:i−1], ∼) as part of their policy allowance. Recall that K(c, t⃗[:i−1], ∼) represents the past knowledge excluding the last output on t⃗. Hence, these policies are persistent policies. All other dynamic policies, on the other hand, are transient policies.

Third, since an erasure policy is by definition transient, persistent policies such as Gradual Release and Paralock cannot check erasure policies, such as the example in Figure 1-B: leaking the credit card after erasure violates the erasure policy.
2) Gradual Release: Gradual Release assumes a mapping Γ from variables to levels in a Denning-style lattice. A release event is generated by a special command x := declassify(e). Informally, a program is secure when illegal flows w.r.t. Γ only occur along with release events. Hence, we encode a release event as

EventOn(r); x := e; output(Γ(x), e); EventOff(r);

where r is a distinguished event for release, and we set ∀x. Γ′(x) = r ? L ⇄ Γ(x) to state that any leakage of any variable is allowed during a release event, but otherwise, the information flow restrictions of Γ are obeyed.

Gradual Release is formalized on the insight that "knowledge must remain constant between releases":
Policy | ∼(t⃗1, t⃗2) | A(m, t⃗, b, L), i = ‖t⃗‖ | ≡(t⃗1, t⃗2)
Gradual Release | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L | ⟦m⟧L,t⃗[i].γ ∩ K(c, t⃗[:i−1], ∼GR) | =
Tight Gradual Release | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L | ⟦m⟧L,t⃗[i].γ | =
According to Policy | ∃R. ∀(i, j) ∈ R. ⌊t⃗1[i]⌋b,L ≅ ⌊t⃗2[j]⌋b,L | ⟦m⟧≠b | =
Cryptographic Erasure | ⌊t⃗1⌋L = ⌊t⃗2⌋L[i:j] | ⋂t∈t⃗ ⟦m⟧L,t.γ | =
Forgetful Attacker | ∃t⃗′ ≼ t⃗2. atk(⌊t⃗1⌋L) = atk(⌊t⃗′⌋L) | ⟦m⟧L,t⃗[i].γ ∩ K(c, t⃗[:i−1], ∼FA) | =
Paralock | ⌊t⃗1⌋A ≼ ⌊t⃗2⌋A | ⟦m⟧A ∩ K(c, t⃗[:i−1], ∼PL) if t⃗[i].∆ ⊆ ΣA; ⟦m⟧∅ otherwise | =
Dynamic Release | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L | ⟦m⟧≠b if b is transient; ⟦m⟧≠b ∩ K(c, t⃗[:i−1], ∼) if b is persistent | ⌊t⃗1⌋b,L[n] = ⌊t⃗2⌋b,L[n]

TABLE I
EXISTING END-TO-END SECURITY POLICIES AND DYNAMIC RELEASE WRITTEN IN THE FORMALIZATION FRAMEWORK.
Definition 11 (Gradual Release [7]): A command c satisfies gradual release w.r.t. Γ if⁷

∀c, m, L, i, t⃗. ⟨c, m⟩ → t⃗ ⟹
∀i not a release event. k(c, m, t⃗[:i], L, Γ) = k(c, m, t⃗[:i−1], L, Γ)

where

k(c, m, t⃗, L, Γ) ≜ {m′ | m′ ∈ ⟦m⟧L,Γ ∧ ⟨c, m′⟩ → t⃗′ ∧ t⃗ ≼ t⃗′}    (6)

While the original definition does not immediately fit our framework, we prove that they are equivalent via:

∼GR ≜ {(t⃗1, t⃗2) | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L}    ≡GR ≜ =
AGR ≜ ⟦m⟧L,t⃗[‖t⃗‖].γ ∩ K(c, t⃗[:‖t⃗‖−1], ∼GR)

Recall that in our encoding, a release event emits an output event ⟨Γ(x), e, γ⊥⟩, where γ⊥ maps all variables to public. This essentially makes the allowance check K(...) ⊇ A trivially true, resembling Definition 11.

Lemma 1: With ∼ ≜ ∼GR, ≡ ≜ ≡GR and A ≜ AGR, Definition 10 is equivalent to Definition 11.
Observation. From Table I, it is obvious that Gradual Release uses indistinguishability ∼L. Its policy allowance is defined by the last dynamic specification t⃗[‖t⃗‖].γ, as well as the knowledge gained from previous outputs.
3) Tight Gradual Release: Tight Gradual Release [2], [8] is an extension of Gradual Release. Like Gradual Release, it assumes a base policy Γ and uses an x := declassify(e) command to declassify the value of e. However, the encoding of the declassification command differs, for two reasons. First, we can only encode the subset of Tight Gradual Release where the declassification command has the form declassify(x), since our language does not fully support partial release (Section III-B2). Second, declassification in Tight Gradual Release is both precise (i.e., only the variable x in declassify(x) is downgraded) and permanent (i.e., the sensitivity of x cannot upgrade after x is declassified). Hence, we encode x′ := declassify(x) as

EventOn(rx); x′ := x; output(Γ(x′), x);

where rx is a distinguished security event for releasing just x, and we set Γ′(x) = rx ? L ← Γ(x) to state that x is declassified once rx is set.

⁷Note that ⟨c, m⟩ → t⃗ only considers terminating program executions by definition. So we use the termination-insensitive version of Gradual Release.
Tight Gradual Release uses the same knowledge definition as Gradual Release, but its execution traces also dynamically track the set of declassified variables X:

⟨c, m, ∅⟩ →* ⟨c′, m′, X⟩

Definition 12 (Tight Gradual Release): A program c is secure if for any trace t⃗, initial memory m and attacker at level L, we have

∀i. 1 ≤ i ≤ ‖t⃗‖. (⟦m⟧L,Γ ∩ ⟦m⟧Xi) ⊆ k(c, m, t⃗[:i], L, Γ)

where Xi is the set of declassified variables associated with the i-th output.

Due to the encoding of declassification commands, we know that for each output at index i in t⃗ we have:

⟦m⟧L,t⃗[i].γ = ⟦m⟧L,Γ ∩ ⟦m⟧Xi
Hence, we can rephrase Tight Gradual Release as follows:

∼TGR ≜ {(t⃗1, t⃗2) | ⌊t⃗1⌋L ≼ ⌊t⃗2⌋L}
≡TGR ≜ =    ATGR ≜ ⟦m⟧L,t⃗[‖t⃗‖].γ

Lemma 2: With ∼ ≜ ∼TGR, ≡ ≜ ≡TGR and A ≜ ATGR, Definition 10 is equivalent to Definition 12.

Observation: Tight Gradual Release is more precise than Gradual Release, since the encoding of declassify(x) precisely downgrades the sensitivity of x but of no other variables, while the encoding for Gradual Release downgrades all variables.

Compared to Dynamic Release, the most important difference is that the consistency relation ≡ is defined in a completely different way. As discussed in Section IV-B, it is important to define it properly for general dynamic policies. The other major difference is that the security semantics of Tight Gradual Release cannot model erasure policies. Consider the example in Figure 1-B.i with m1(credit_card) = 0, m2(credit_card) = 1 and attacker level M. Given a program execution ⟨c, m1⟩ → t⃗, we have K(c, t⃗[:i], ∼TGR) = {m1} for all i ≥ 1. However, credit_card is upgraded from M to ⊤ when i = 2 (i.e., at the second output), so the secure program (i) is incorrectly rejected, since K(c, t⃗[:2], ∼TGR) = {m1} ⊉ {m1, m2} = ⟦m1⟧M,t⃗[2].γ.
4) According to Policy: Chong and Myers propose noninterference according to policy [18], [19] to integrate erasure and declassification policies. We use the formalization in the more recent paper [19] as the security definition.

This work uses compound labels, a security specification similar to ours: a label is either a simple level drawn from a Denning-style lattice, or of the form q1 →ᵉ q2, where q1 and q2 are themselves compound labels. Hence, converting this specification into ours is straightforward.

Noninterference according to policy is defined for each variable in a two-run style. In particular, it requires that for any two program executions whose initial memories differ only in the value of the variable of interest, their traces are indistinguishable with regard to a correspondence R:

Definition 13 (Noninterference According To Policy [19]): A program c is noninterference according to policy if for any variable x (with policy b) we have:⁸

∀m1, m2, ℓ, t⃗1, t⃗2. (∀y ≠ x. m1(y) = m2(y))
∧ ⟨c, m1⟩ → t⃗1 ∧ ⟨c, m2⟩ → t⃗2 ⟹
∃R. ∀(i, j) ∈ R. ℓ ∉ ⟦b⟧τ1[:i] ∧ ℓ ∉ ⟦b⟧τ2[:j] ⇒ τ1[i] ≈ τ2[j]

where a correspondence R between traces τ1 and τ2 is a subset of ℕ × ℕ such that:

1) (Completeness) either {i | (i, j) ∈ R} = {i ∈ ℕ | i < |τ1|} or {j | (i, j) ∈ R} = {j ∈ ℕ | j < |τ2|}, and
2) (Initial configurations) if ‖R‖ > 0 then (0, 0) ∈ R, and
3) (Monotonicity) for all (i, j) ∈ R and (i′, j′) ∈ R, if i < i′ then j ≤ j′ and, symmetrically, if j < j′ then i ≤ i′.
To transform Definition 13 into our framework, we make a few important observations:

• The definition relates two memories that differ in exactly one variable (i.e., ∀y ≠ x. m1(y) = m2(y)), which differs from the usual low-equivalence requirement in other definitions. However, it is easy to prove (shown shortly) that it is equivalent to a per-policy definition ⟦m⟧≠b in our framework, which considers memories that differ only in variables with a particular policy b.
• The component ℓ ∉ ⟦q⟧t⃗1[:i] ∧ ℓ ∉ ⟦q⟧t⃗2[:j] filters out non-interesting outputs, which serves the same function as the filtering function ⌊t⃗⌋b,L.
• We define ≅ on two output sequences as below:

t⃗1 ≅ t⃗2 ⟺ ¬(‖t⃗1‖ = ‖t⃗2‖ ∧ ∃i. t⃗1[i] ≠ t⃗2[i])

⁸The original definition uses a specialized label semantics, denoted ⟦b⟧⟨c,m⟩, and requires (⟨ci, mi⟩, ℓ) ∉ ⟦b⟧⟨c,m⟩, which means that by the time ⟨c, m⟩ reaches state ⟨ci, mi⟩, confidentiality level ℓ may not observe the information. It is easy to convert that to ℓ ∉ ⟦b⟧τ[:i] in our notation.

Based on these observations, we convert Definition 13 into our framework as follows:

∼AP ≜ {(t⃗1, t⃗2) | ∃R. ∀(i, j) ∈ R. ⌊t⃗1[i]⌋b,L ≅ ⌊t⃗2[j]⌋b,L}
≡AP ≜ =    AAP ≜ ⟦m⟧≠b

Lemma 3: With ∼ ≜ ∼AP, A ≜ AAP, and consistency ≡ ≜ ≡AP, Definition 10 is equivalent to Definition 13.
Observation: Compared with Gradual Release and Tight Gradual Release, the most interesting component of According to Policy is its unique indistinguishability definition, which uses the correspondence relation R. Intuitively, According to Policy relaxes the indistinguishability definition so that two executions are indistinguishable as long as some correspondence R exists, which allows decreasing knowledge. However, as shown later in the evaluation, the relaxation with R can be too loose: it falsely accepts insecure programs.
5) Cryptographic Erasure: Cryptographic Erasure [5] uses the same compound labels to describe erasure policies, and knowledge is defined as:

kCE(c, L, t⃗) = {m | ⟨c, m⟩ −t⃗1→* ⟨c1, m1⟩ −t⃗2→* ⟨c′, m′⟩ ∧ ⌊t⃗2⌋L = ⌊t⃗⌋L}

Unlike other policies, this definition specifies knowledge based on the subtrace relation rather than the standard prefix relation. The reason is that it has a different attack model: it assumes an attacker who might not be able to observe the program execution from the beginning.
Definition 14 (Cryptographic Erasure Security [5]): A program c is secure if, for any execution starting with memory m0, the following holds:

∀c0, m0, ci, mi, cn, mn, t⃗1, t⃗2, L, i, n.
⟨c0, m0⟩ −t⃗1→* ⟨ci, mi⟩ −t⃗2→* ⟨cn, mn⟩
⟹ kCE(c, L, t⃗2) ⊇ ⋂_{t ∈ t⃗2} ⟦m⟧L,t.γ

To model subtraces, we adjust the ∀1 ≤ i ≤ ‖t⃗‖ quantifier in the framework to ∀1 ≤ i < j ≤ ‖t⃗‖, and write t⃗[i:j] for the subtrace between i and j. Then, converting Definition 14 into our framework is relatively straightforward:

∼CE ≜ {(t⃗1, t⃗2) | ⌊t⃗1⌋L is a subtrace of ⌊t⃗2⌋L}
≡CE ≜ =    ACE ≜ ⋂_{t ∈ t⃗} ⟦m⟧L,t.γ

Lemma 4: With ∼ ≜ ∼CE, ≡ ≜ ≡CE and A ≜ ACE, Definition 10 with the adjusted attack model is equivalent to Definition 14.
Observation: Compared with other works, the most interesting part of Cryptographic Erasure is that its indistinguishability and policy allowance are both defined on subtraces; moreover, the latter uses the weakest policy on the subtrace. Intuitively, we can interpret Cryptographic Erasure security as: the subtrace-based knowledge gained from observing a subtrace should be bounded by the smallest allowance (i.e., the weakest policy) on the trace.
6) Forgetful Attacker: Forgetful Attacker [3], [51] is an expressive policy in which an attacker can "forget" some learned knowledge. To this end, the attacker is formalized as an automaton Atk⟨QA, qinit, δA⟩, where QA is a set of attacker states, qinit ∈ QA is the initial state, and δA is the transition function. The attacker observes the events produced by a program execution and updates its state accordingly:

Atk(ε) = qinit
Atk(t⃗[:i]) = δA(Atk(t⃗[:i−1]), t⃗[i])

Given a program c, an automaton Atk and an attacker level L, knowledge is defined as the set of initial memories that could have resulted in the same state of the automaton:

kFA(c, L, Atk, t⃗) = {m | ⟨c, m⟩ −t⃗1→* ⟨c′, m′⟩ −t⃗2→* m′′ ∧ Atk(⌊t⃗1⌋L) = Atk(⌊t⃗⌋L)}
Definition 15 (Security for Forgetful Attacker [3]): A pro-
gram cis secure against an attacker AtkhQA, qinit, δA)iwith
level Lif:
∀c, c0, m, m0,
t, t0, L. hc, m1i→
t·t0⇒
kFA(c, L, Atk,
t·t0)⊇kFA(c, L, Atk,
t)∩JmKL,γ0
The conversion of Definition 15 to our framework is straightforward:

    ∼_FA ≜ {(t⃗₁, t⃗₂) | ∃ t⃗′ ⊴ t⃗₂. Atk(t⃗₁) = Atk(t⃗′)}        ≡_FA ≜ =
    A_FA ≜ K(c, t⃗[:‖t⃗‖−1], ∼_FA) ∩ ⟦m⟧_{L, t⃗[‖t⃗‖].γ}
Lemma 5: With ∼ ≜ ∼_FA, A ≜ A_FA, and outside equivalence ≡ ≜ ≡_FA, Definition 10 is equivalent to Definition 15.
Observation: We note that Forgetful Attacker (Definition 15) was originally formalized in the same format as Dynamic Release (the persistent case). However, there are various differences in the modeling, as can be observed from Table I. Most importantly, Forgetful Attacker security is parameterized by an automaton Atk; in other words, a program might be both "secure" and "insecure" depending on the given automaton. Consider the program in Figure 1-B(i). The program satisfies Forgetful Attacker security with any automaton that forgets the credit card information. Nevertheless, characterizing such "willfully stupid" attackers is an open question [3]. Second, the definition of the consistency relation ≡ is completely different. As discussed in Section IV-B, it is important to define it properly to allow information flow restrictions to downgrade and upgrade in arbitrary ways.
VI. EVALUATION
In this section, we introduce the AnnTrace benchmark and implement the dynamic policies in the form shown in Table I. The benchmark and implementations are available on GitHub9.
9https://github.com/psuplus/AnnTrace
lat = Lattice()
lat.add_sub(Label("M"), lat.top)
Program(
    secure=True,
    source_code="""
        // credit_card: M
        copy := credit_card
        output(copy, M);
        // credit_card: Top
        copy := 0;
        output(copy, M)""",
    persistent=False,
    traces=[
        Trace(init_memory=dict(cc=0), outputs=[
            Out('M', 0, {'cc': 'M'}),
            Out('M', 0, {'cc': 'Top'})]),
        Trace(init_memory=dict(cc=1), outputs=[
            Out('M', 1, {'cc': 'M'}),
            Out('M', 0, {'cc': 'Top'})]),
        Trace(init_memory=dict(cc=2), outputs=[
            Out('M', 2, {'cc': 'M'}),
            Out('M', 0, {'cc': 'Top'})])
    ],
    lattice=lat)
Fig. 5. Annotated Program for Fig. 1-B(i)
A. AnnTrace Benchmark
To facilitate testing and understanding of dynamic policies, we created the AnnTrace benchmark. It consists of a set of programs annotated with trace-level security specifications. Among the 58 programs in the benchmark, 35 are collected from existing works [7], [3], [5], [45], [19], [14]. References to the original examples are annotated in the benchmark programs. The benchmark also includes 23 programs that we created, such as the programs in Figure 1 and the counterexamples in Figure 6.
The benchmark is written in Python. Fig. 5 shows an example of an annotated program for the source code in Fig. 1-B(i). As shown in the example, each program consists of:
•secure, a boolean value indicating whether this program is a secure program; the ground truth of our evaluation.
•source code, written in the syntax shown in Fig 2;
•persistent, a boolean value indicating whether the intended policy in this program is persistent (or transient);
•lattice, L, the security lattice used by the program10;
•traces, executions of the program. Each trace τ has:
  –initial memory, m, a mapping from variables to integers;
  –outputs, t⃗, a list of output events, each t of type Out:
    ∗output level, ℓ, a level from the lattice L;
    ∗output value, v, an integer value;
    ∗policy state, γ, a mapping from variables to levels.
Given a program in existing work, we (1) use the claimed security of the code as the ground truth, (2) convert the program into our specification language and to a security lattice, (3) mark it persistent (or transient) according to whether the corresponding paper presents a persistent (or transient) policy, and (4) manually write down a finite number of traces that are sufficient for checking the dynamic policy involved in the example.
10We use a lattice instead of a level set for conciseness in the implementation.

                            Examples in Fig 1                 Existing (35)    New (23)
                            A(i) A(ii) B(i) B(ii) C(i) C(ii)   X   ×   -      X   ×   -
Gradual Release              X    X    -    -     X    X      28   2   5     14   1   8
Tight Gradual Release        X    X    -    -     X    X      18   0  17      8   0  15
According to Policy p        X    X    X    ×     -    -      17   6  12     12   4   7
Cryptographic Erasure        -    -    X    X     -    -      21   0  14      7   1  15
Forgetful Attacker-Single    X    X    X    ×     X    X      31   4   0     19   4   0
Dynamic Release              X    X    X    X     X    X      35   0   0     23   0   0

'X' means the policy checks the program as intended (same as ground truth); '×' means the policy fails to check the program as intended; '-' means the program is not in the scope of the policy (not applicable).
TABLE II
EVALUATION RESULTS.
B. Implementation
We implemented all dynamic policies in Table I in Python, according to the formalization presented in the table. With the exception of Forgetful Attacker and Paralocks, all implemented policies can directly work on the trace annotation provided by the AnnTrace benchmark. The Forgetful Attacker policy requires an automaton as input, so we use a single-memory automaton that only remembers the last output and forgets all previous outputs. Paralocks security requires "locks" in a test program, but most tests do not have locks, so we are unable to directly evaluate it on the AnnTrace benchmark.11
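The single-memory automaton can be sketched in a few lines; the class below is our own illustration of the idea, not the actual benchmark code:

```python
class SingleMemoryAttacker:
    """Forgetful attacker automaton that remembers only the most
    recent output event: delta(q, e) = e, discarding the old state."""

    def __init__(self):
        self.state = None  # q_init: nothing observed yet

    def observe(self, event):
        self.state = event  # the transition drops all earlier history
        return self.state
```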
Existing policies are not generally applicable to all tests. Recall that each test has a persistent/transient field. Moreover, for each test, we automatically generate the following two features from the traces field:
A. there is no policy upgrading in the trace;
B. there is no policy downgrading in the trace.
These tags are used to determine whether a concrete policy is applicable to a test. For example, Cryptographic Erasure is a transient policy that only allows upgrading; hence, it is applicable to the tests with tags transient and B.
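Concretely, the tags can be derived by scanning the per-output policy states γ in each trace. The sketch below is our own reconstruction; the function name and data layout are assumptions, not the benchmark's API:

```python
def trace_tags(traces, leq):
    """Return (tag_A, tag_B) for a test: A = no policy upgrading in
    any trace, B = no policy downgrading. Each trace is a list of
    policy states (variable -> level); leq(a, b) is the lattice order."""
    no_up = no_down = True
    for trace in traces:
        for prev, cur in zip(trace, trace[1:]):
            for var, old in prev.items():
                new = cur.get(var, old)
                if new != old:
                    if leq(old, new):
                        no_up = False    # a level strictly rose
                    if leq(new, old):
                        no_down = False  # a level strictly fell
    return no_up, no_down
```

For the Fig. 5 example, where cc goes from M to Top, this yields tag A false and tag B true, so only downgrade-free policies apply.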
C. Results
The evaluation results are summarized in Table II. For the examples shown in Figure 1 (classical examples for declassification, erasure and delegation/revocation), we note that Dynamic Release is the only policy that is both applicable and correct in all cases.
Among the 35 programs collected from prior papers and the 23 new programs, Dynamic Release is still both applicable and correct on all programs. In contrast, the existing works fall short in one way or another: with limited applicability or incorrect judgments on secure/insecure programs. Interestingly, According to Policy, Cryptographic Erasure and Gradual Release all make wrong judgments on some corner cases. Here, we discuss a few representative ones.
11Although we are unable to evaluate Paralocks directly, we believe its results should resemble those of Gradual Release, as its security condition is a generalization of the gradual release definition [15].

(A)                      (B)
// x: L                  // h, h1: {D}⇒a
output(L, 0);            // l, l2: {}⇒a
// x: H                  open(D);
if (x == 0)              if (h) { l2 := h1; }
  output(L, 0);          close(D);
                         l := 0;

Fig. 6. Counterexamples for Crypto-Erasure and Paralocks.
For According to Policy, the problematic part is the R relation. The policy states that as long as a qualified R can be found to satisfy the equation, a program is secure. We found that the restriction on R is too weak in many cases: a qualified R exists for a few insecure programs.
For the Crypto-Erasure policy, the failing example is shown in Figure 6-(A). It is an insecure program, as the attacker learns that x = 0 if two outputs are observed. However, Crypto-Erasure accepts this program as secure because its policy ignores the position of an output. In this example, for the output 0, the security definition of Crypto-Erasure assumes that two executions are indistinguishable to the attacker if there exists a 0 output anywhere in the execution. Therefore, an execution with a single 0 output appears indistinguishable from the execution with two 0 outputs (both contain a 0 output). Thus, the policy fails to reject this program.
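The leak in Figure 6-(A) is easy to replay by brute force. In this sketch (our own illustration, not the benchmark implementation), the program is a function from the secret x to its output list, and exact-trace knowledge shows that observing two outputs pins down x = 0:

```python
def program_a(x):
    """Outputs of the Figure 6-(A) program: one unconditional 0 at L,
    then (after x is upgraded to H) another 0 only when x == 0."""
    outs = [0]
    if x == 0:
        outs.append(0)
    return outs

def knowledge(observed, domain):
    """Initial values of x whose output trace matches the observation."""
    return {x for x in domain if program_a(x) == observed}
```

Observing [0, 0] yields the singleton {0}, so the attacker learns x = 0; yet the single-output trace is a subtrace of the two-output trace, so Crypto-Erasure's subtrace-based indistinguishability never separates the two runs.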
For Gradual Release, it fails on the following secure program, where h, h1 : S and l, l2 : P.

    if (h) then l2 := declassify(h1);
    l := 0;

This example might seem insecure on the surface, as the branch condition h is not part of the declassify expression. But in the formal semantics (Section V-B2), a release event declassifies all information in the program (i.e., Gradual Release does not provide a precise bound on the released information, as pointed out in [7], [2]). The program is secure since h1 is assigned to l2 when both h and h1 are declassified by the release event.
To check if a similar issue also exists in Paralocks, whose security condition is a generalization of Gradual Release, we created a Paralocks version of the same code, as shown in Figure 6-(B). Thanks to the cleaner syntax of Paralocks, it is more obvious that the program is secure: h and h1 have the same lock set {D}. Lock D is opened before the if statement, allowing the values of both h and h1 to flow to l, l2. So the assignment in the if branch is secure. After that, only a constant 0 is assigned to l when the lock D is closed. However, the Paralocks implementation rejects this program as insecure. To understand why, recall that Paralocks requires that the knowledge of an attacker remain the same whenever the current lock state is a subset of the lock set that the attacker has. We are interested in the attacker A = (a, ∅), who has an empty lock set. When lock D is open, since {D} ⊈ ∅, there is no restriction on the assignment l2 := h1. However, for the assignment l := 0, the current lock set is ∅, which is a subset of A's lock set (∅). That is, for all executions, attacker A's knowledge should not change by observing the output event from the assignment l := 0. However, this does not hold for the execution starting with h = 0. Initially, attacker A knows nothing about h or h1, since they are protected by lock D. With h = 0, the assignment in the branch is not executed, and the attacker only observes the output from l := 0. By observing that output, the attacker immediately learns that h = 0. Therefore, Paralocks rejects this program as insecure.
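This argument can be replayed concretely. The sketch below is our own modeling (the event layout is an assumption): it records the A-visible outputs of the Figure 6-(B) program for the attacker A = (a, ∅), who sees assignments to l and l2, together with the locks open when each output fired:

```python
def visible_trace(h, h1):
    """A-visible outputs of the Fig. 6-(B) program for A = (a, {}):
    tuples (variable, value, locks open when the output fired)."""
    t = []
    if h:                              # open(D); guarded assignment
        t.append(('l2', h1, frozenset({'D'})))
    t.append(('l', 0, frozenset()))    # close(D); l := 0
    return t

def knowledge(prefix, domain):
    """Initial (h, h1) pairs whose visible trace starts with prefix."""
    return {m for m in domain
            if visible_trace(*m)[:len(prefix)] == prefix}
```

For h = 0 the first visible event is the l := 0 output, fired with lock state ∅ ⊆ Σ_A; Paralocks then demands that knowledge not change, yet it shrinks from all four (h, h1) pairs to the two with h = 0, so the program is rejected.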
VII. RELATED WORK
The most related works are those presenting high-level discussions on what end-to-end confidentiality should look like for some dynamic security policy. The major ones are already discussed and compared in this paper.
To precisely describe a dynamic policy, RIF [36], [35] uses a reclassification relation to associate label changes with program outputs. While this approach is highly expressive, writing down the correct relation with regard to numerous possible outputs is arguably a time-consuming and error-prone task. Similarly, flow-based declassification [44] uses a graph to pin down the exact paths leading to a declassification. However, the policy specification is tied to the literal implementation of a program, which might limit its use in practice.
Bastys et al. [11] present six informal design principles for security definitions and enforcement. They summarize and categorize existing works to build a road map of the state of the art. Then, from this top-down view, they provide guidance on how to approach a new enforcement or definition. In contrast, the framework and the benchmark proposed in this paper are post-checks after a definition is formalized.
Recent work [20] presents a unified framework for expressing and understanding downgrading policies. Similar to Section IV, the goal of the framework is to make obvious the meaning of existing work. Based on that, they move further to sketch safety semantics for enforcement mechanisms. However, they do not provide a formalization framework that allows us to compare various policies at the semantic level.
Many existing works [39], [29], [17] reuse or extend the representative policies we discussed in this paper. They adopt the major definitions for their specialized interests, which are orthogonal to ours. Hunt and Sands [33] present an interesting insight on erasure, but their labels and final security definition are attached to scopes, which is not directly comparable with the end-to-end definitions discussed in this work. Contextual noninterference [42] and facets [9] use dynamic labels to keep track of information flows in different branches. The purpose of those labels is to boost flow- or path-sensitivity; they are not intended for dynamic policies.
VIII. CONCLUSION AND FUTURE WORK
We present the first formalization framework that allows an apples-to-apples comparison between various dynamic policies. The comparison sheds new light on existing definitions, such as the distinction between transient and persistent policies, and motivates Dynamic Release, a new general dynamic policy proposed in this work. Moreover, we built a new benchmark for testing and understanding dynamic policies in general.
For future work, we plan to investigate the semantic security conditions of dynamic information flow methods, especially those using dynamic security labels. Despite the similarity that security levels are mutable, issues such as label channels might be challenging to incorporate into our formalization framework. Moreover, Dynamic Release offers a semantic definition of information-flow security, but checking it on real programs is infeasible unless only a small number of traces is produced. We plan to develop a static type system to check Dynamic Release in a sound and scalable manner.
Another future direction is to fully support partial release with expression-level specifications. However, doing so is tricky, since the expressions might have conflicting specifications. For example, consider a specification x, y : S and x + y, x − y : P. It states that the values of x and y are secrets, but the values of x + y and x − y are public. Mathematically, learning the values of x + y and x − y also reveals the concrete values of x and y. Thus, it becomes tricky to define security in the presence of expression-level specifications.
IX. ACKNOWLEDGEMENT
We would like to thank the anonymous CSF reviewers for their constructive feedback. This research was supported by NSF grants CNS-1942851 and CNS-1816282.
REFERENCES
[1] O. Arden, J. Liu, and A. C. Myers, “Flow-limited authorization,” in 2015
IEEE 28th Computer Security Foundations Symposium, July 2015, pp.
569–583.
[2] A. Askarov and A. Sabelfeld, “Tight enforcement of information-release
policies for dynamic languages,” in 2009 22nd IEEE Computer Security
Foundations Symposium, July 2009, pp. 43–59.
[3] A. Askarov and S. Chong, “Learning is change in knowledge:
Knowledge-based security for dynamic policies,” in Proc. IEEE
Symp. on Computer Security Foundations. IEEE, 2012, pp. 308–322.
[4] A. Askarov, S. Hunt, A. Sabelfeld, and D. Sands, “Termination-
insensitive noninterference leaks more than just a bit,” in European
symposium on research in computer security. Springer, 2008, pp. 333–
348.
[5] A. Askarov, S. Moore, C. Dimoulas, and S. Chong, “Cryptographic
enforcement of language-based information erasure,” in 2015 IEEE 28th
Computer Security Foundations Symposium. IEEE, 2015, pp. 334–348.
[6] A. Askarov and A. Sabelfeld, “Security-typed languages for implemen-
tation of cryptographic protocols: A case study,” in European Symposium
on Research in Computer Security. Springer, 2005, pp. 197–221.
[7] ——, “Gradual release: Unifying declassification, encryption and key
release policies,” in Proc. IEEE Symp. on Security and Privacy (S&P),
2007, pp. 207–221.
[8] ——, “Localized delimited release: combining the what and where
dimensions of information release,” in Proceedings of the 2007 workshop
on Programming languages and analysis for security, 2007, pp. 53–60.
[9] T. H. Austin and C. Flanagan, “Multiple facets for dynamic information
flow,” in Proceedings of the 39th annual ACM SIGPLAN-SIGACT
symposium on Principles of programming languages, 2012, pp. 165–
178.
[10] A. Banerjee, D. A. Naumann, and S. Rosenberg, “Expressive declassifi-
cation policies and modular static enforcement,” in Proc. IEEE Symp. on
Security and Privacy (S&P). IEEE, 2008, pp. 339–353.
[11] I. Bastys, F. Piessens, and A. Sabelfeld, “Prudent design principles
for information flow control,” in Proceedings of the 13th Workshop on
Programming Languages and Analysis for Security, ser. PLAS ’18. New
York, NY, USA: ACM, 2018, pp. 17–23.
[12] T. Bauereiß, A. P. Gritti, A. Popescu, and F. Raimondi, “Cosmedis: a
distributed social media platform with formally verified confidentiality
guarantees,” in 2017 IEEE Symposium on Security and Privacy (SP).
IEEE, 2017, pp. 729–748.
[13] N. Broberg and D. Sands, “Flow locks: Towards a core calculus for
dynamic flow policies,” in European Symposium on Programming.
Springer, 2006, pp. 180–196.
[14] ——, “Flow-sensitive semantics for dynamic information flow policies,”
in Proceedings of the ACM SIGPLAN Fourth Workshop on Programming
Languages and Analysis for Security. ACM, 2009, pp. 101–112.
[15] ——, “Paralocks: role-based information flow control and beyond,” in
ACM Symposium on Principles of Programming Languages, vol. 45,
no. 1. ACM, 2010, pp. 431–444.
[16] N. Broberg, B. van Delft, and D. Sands, “The anatomy and facets
of dynamic policies,” in Proc. IEEE Symp. on Computer Security
Foundations. IEEE, 2015, pp. 122–136.
[17] P. Buiras and B. van Delft, “Dynamic enforcement of dynamic policies,”
in Proceedings of the 10th ACM Workshop on Programming Languages
and Analysis for Security, ser. PLAS’15. New York, NY, USA: ACM,
2015, pp. 28–41.
[18] S. Chong and A. C. Myers, “Language-based information erasure,” in
Proc. IEEE Computer Security Foundations Workshop. IEEE, 2005,
pp. 241–254.
[19] ——, “End-to-end enforcement of erasure and declassification,” in IEEE
Symp. on Computer Security Foundations, 2008, pp. 98–111.
[20] A. Chudnov and D. A. Naumann, “Assuming you know: Epistemic
semantics of relational annotations for expressive flow policies,” in 2018
IEEE 31st Computer Security Foundations Symposium (CSF), July 2018,
pp. 189–203.
[21] M. R. Clarkson, S. Chong, and A. C. Myers, “Civitas: Toward a secure
voting system,” in Proc. IEEE Symp. on Security and Privacy (S&P),
2008, pp. 354–368.
[22] E. S. Cohen, “Information transmission in sequential programs,” Foun-
dations of Secure Computation, pp. 297–335, 1978.
[23] F. Del Tedesco, S. Hunt, and D. Sands, “A semantic hierarchy for erasure
policies,” in International Conference on Information Systems Security.
Springer, 2011, pp. 352–369.
[24] D. E. Denning, “A lattice model of secure information flow,” Comm. of
the ACM, vol. 19, no. 5, pp. 236–243, 1976.
[25] D. Ferraiolo, J. Cugini, and D. R. Kuhn, “Role-based access control
(rbac): Features and motivations,” in Proceedings of 11th annual com-
puter security application conference, 1995, pp. 241–48.
[26] R. Giacobazzi and I. Mastroeni, “Abstract non-interference: Parame-
terizing non-interference by abstract interpretation,” in ACM SIGPLAN
Notices, vol. 39, no. 1, 2004, pp. 186–197.
[27] ——, “Adjoining declassification and attack models by abstract inter-
pretation,” in European Symposium on Programming. Springer, 2005,
pp. 295–310.
[28] J. A. Goguen and J. Meseguer, “Security policies and security models,”
in IEEE Symp. on Security and Privacy (S&P), Apr. 1982, pp. 11–20.
[29] A. Gollamudi and S. Chong, “Automatic enforcement of expressive
security policies using enclaves,” in Proceedings of the 2016 ACM
SIGPLAN International Conference on Object-Oriented Programming,
Systems, Languages, and Applications, ser. OOPSLA 2016, vol. 51,
no. 10. ACM, 2016, pp. 494–513.
[30] R. R. Hansen and C. W. Probst, “Non-interference and erasure policies
for java card bytecode,” in 6th International Workshop on Issues in the
Theory of Security (WITS’06), 2006.
[31] B. Hicks, K. Ahmadizadeh, and P. McDaniel, “From languages to
systems: Understanding practical application development in security-
typed languages,” in 2006 22nd Annual Computer Security Applications
Conference (ACSAC’06). IEEE, 2006, pp. 153–164.
[32] M. Hicks, S. Tse, B. Hicks, and S. Zdancewic, “Dynamic updating of
information-flow policies,” in Proc. of Foundations of Computer Security
Workshop, 2005, pp. 7–18.
[33] S. Hunt and D. Sands, “Just forget it–the semantics and enforcement
of information erasure,” in European Symposium on Programming.
Springer, 2008, pp. 239–253.
[34] S. Kanav, P. Lammich, and A. Popescu, “A conference management sys-
tem with verified document confidentiality,” in International Conference
on Computer Aided Verification. Springer, 2014, pp. 167–183.
[35] E. Kozyri, O. Arden, A. C. Myers, and F. B. Schneider, “Jrif: reactive
information flow control for java,” in Foundations of Security, Protocols,
and Equational Reasoning. Springer, 2019, pp. 70–88.
[36] E. Kozyri and F. B. Schneider, “Rif: Reactive information flow labels,”
Journal of Computer Security, no. Preprint, pp. 1–38, 2020.
[37] P. Li and S. Zdancewic, “Downgrading policies and relaxed noninter-
ference,” in ACM SIGPLAN Notices, vol. 40, no. 1. ACM, 2005, pp.
158–170.
[38] A. A. Matos and G. Boudol, “On declassification and the non-disclosure
policy,” in Proc. IEEE Computer Security Foundations Workshop
(CSFW). IEEE, 2005, pp. 226–240.
[39] M. McCall, H. Zhang, and L. Jia, “Knowledge-based security of
dynamic secrets for reactive programs,” in 2018 IEEE 31st Computer
Security Foundations Symposium (CSF), July 2018, pp. 175–188.
[40] A. C. Myers and B. Liskov, “A decentralized model for information flow
control,” in Symp. on Operating Systems Principles (SOSP), 1997, pp.
129–142.
[41] ——, “Protecting privacy using the decentralized label model,” ACM
Transactions on Software Engineering and Methodology (TOSEM),
vol. 9, no. 4, pp. 410–442, 2000.
[42] N. Polikarpova, J. Yang, S. Itzhaky, T. Hance, and A. Solar-Lezama,
“Enforcing information flow policies with type-targeted program syn-
thesis,” arXiv preprint arXiv:1607.03445, 2016.
[43] S. Preibusch, “Information flow control for static enforcement of user-
defined privacy policies,” in 2011 IEEE International Symposium on
Policies for Distributed Systems and Networks. IEEE, 2011, pp. 133–
136.
[44] B. P. Rocha, S. Bandhakavi, J. den Hartog, W. H. Winsborough, and
S. Etalle, “Towards static flow-based declassification for legacy and
untrusted programs,” in 2010 IEEE Symposium on Security and Privacy.
IEEE, 2010, pp. 93–108.
[45] A. Sabelfeld and A. C. Myers, “A model for delimited information
release,” in International Symposium on Software Security. Springer,
2003, pp. 174–191.
[46] A. Sabelfeld and D. Sands, “A per model of secure information flow in
sequential programs,” Higher-order and symbolic computation, vol. 14,
no. 1, pp. 59–91, 2001.
[47] ——, “Dimensions and principles of declassification,” in IEEE Com-
puter Security Foundations Workshop. IEEE, 2005, pp. 255–269.
[48] ——, “Declassification: Dimensions and principles,” Journal of Com-
puter Security, vol. 17, no. 5, pp. 517–548, 2009.
[49] A. Stoughton, A. Johnson, S. Beller, K. Chadha, D. Chen, K. Foner,
and M. Zhivich, “You sank my battleship: A case study in secure
programming,” 2014.
[50] N. Swamy, M. Hicks, S. Tse, and S. Zdancewic, “Managing policy
updates in security-typed languages,” in Proc. IEEE Computer Security
Foundations Workshop (CSFW). IEEE, 2006.
[51] B. van Delft, S. Hunt, and D. Sands, “Very Static Enforcement of
Dynamic Policies,” in Principles of Security and Trust, R. Focardi and
A. Myers, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015,
pp. 32–52.
APPENDIX
A. Paralock in Table I
Paralocks [13], [14], [15] uses locks to formalize the sensitivity of security objects. It uses a fine-grained model to encode role-based access control systems, and covers both declassification and revocation of information to a principal in the system. As described in Section III-B3, a security specification is written as {Σ ⇒ a; ...}, where Σ is a lock set and a is an actor. An actor a is the base sensitivity entity of the model; it is used to model a lattice level L in the two-point lattice {H, L} in [14], and a principal p in the role-based access control system of [15].
To formalize Paralocks security, an attacker A = (a, Σ) is modeled as an actor a with a (static) set of open locks Σ. To simplify notation, we use Γ(x, a) = Σ to denote the fact that {Σ ⇒ a} is part of the security policy of x; otherwise Γ(x, a) = ⊤. With respect to an attacker A = (a, Σ), a variable x is observable to A iff Γ(x, a) ⊆ Σ, meaning that the attacker possesses more open locks than what is required by the policy.
To simplify the notation in this work, we extend the output event t to also record the currently open locks. So, for a trace fragment ⟨c, m⟩ --(b, v, γ)--> ⟨c′, m′⟩, it generates the output event t = ⟨b, v, γ, ∆⟩, where ∆ = unlock(⟨c, m⟩).
Let ‖A‖ be the set of variables that are visible to A, and ⌊t⃗⌋_A be the outputs that are visible to A = (a_A, Σ_A):

    ‖A‖ ≜ {x ∈ Vars | Γ(x, a_A) ⊆ Σ_A}
    ⌊t⃗⌋_A ≜ ⌊t⃗⌋_{λ⟨b, n, γ, ∆⟩. Γ(b, a_A) ⊆ Σ_A}
Paralocks security defines the attacker's knowledge12 as follows:

    k_PL(c, m, t⃗, A) = {m′ | m′ ≈_{‖A‖} m
        ∧ ⟨c, m′⟩ --t⃗₁-->* ⟨c′, m″⟩ --t⃗₂-->* m‴ ∧ ⌊t⃗₁⌋_A = ⌊t⃗⌋_A}
Paralocks security semantics extends that of Gradual Release by treating "unlock" events as releasing events:
Definition 16 (Paralock Security): A program c is Paralock secure if, for any attacker A = ⟨a, Σ_A⟩, the attacker's knowledge remains unchanged whenever unlock(τ[i]) ⊆ Σ_A:

    ∀c, m, m′, t⃗, t′, a, Σ_A, A, i.
        ⟨c, m⟩ --t⃗-->* ⟨c′, m′⟩ --t′--> ⟨c″, m″⟩ ∧ A = ⟨a, Σ_A⟩
        ∧ unlock(⟨c″, m″⟩) ⊆ Σ_A
        ⇒ k_PL(c, m, t⃗·t′, A) = k_PL(c, m, t⃗, A)
We use the memory closure on A for memories that look the same to attacker A:

    ⟦m⟧_A ≜ {m′ | ∀x ∈ Vars. Γ(x, a_A) ⊆ Σ_A ⇒ m(x) = m′(x)}

12We revised the definition for termination insensitivity. We note that Paralocks also presents a different termination-insensitive policy following ideas from [4]. However, here we follow Gradual Release and define termination-insensitive knowledge by taking an intersection with the initial memories of traces that terminate.
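Over a finite value domain, the closure ⟦m⟧_A can be enumerated directly. The sketch below is our own finite-domain illustration (the function name is ours):

```python
from itertools import product

def closure_A(m, observable, domain):
    """All memories over m's variables that agree with m on the
    A-observable variables (those x with Gamma(x, a_A) contained
    in Sigma_A); a finite-domain sketch of [[m]]_A."""
    vs = sorted(m)
    result = []
    for vals in product(domain, repeat=len(vs)):
        cand = dict(zip(vs, vals))
        if all(cand[x] == m[x] for x in observable):
            result.append(cand)
    return result
```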
The conversion of Definition 16 to our framework is straightforward:

    ∼_PL ≜ {(t⃗₁, t⃗₂) | ⌊t⃗₁⌋_A prefix of ⌊t⃗₂⌋_A}        ≡_PL ≜ =
    A_PL ≜ { K(c, t⃗[:i−1], ∼_PL) ∩ ⟦m⟧_A    if t⃗[‖t⃗‖].∆ ⊆ Σ_A
             ⟦m⟧_∅                           otherwise

Lemma 6: With ∼ ≜ ∼_PL and A ≜ A_PL, Definition 10 is equivalent to Definition 16.
B. Equivalence Proofs for Table I
We first introduce a useful lemma which allows us to recast the original knowledge definition in [7] as the knowledge definition K in this paper.
Lemma 7: Let k be defined as in Equation 6 and K be defined as in Equation 5. Then we have:

    ∀c, m, L, Γ, t⃗, M.
        M ⊆ ⟦m⟧_{L,Γ} ∧ k(c, m, t⃗, L, Γ) ⊆ M ⟹
        ( k(c, m, t⃗, L, Γ) = M ⟺ K(c, t⃗, ∼_NI) ⊇ M )
Proof. By definition, we know

    k(c, m, t⃗, L, Γ) = K(c, t⃗, ∼_NI) ∩ ⟦m⟧_{L,Γ}

•case ⟹: we know

    M = k(c, m, t⃗, L, Γ) = K(c, t⃗, ∼_NI) ∩ ⟦m⟧_{L,Γ}
    K(c, t⃗, ∼_NI) ⊇ K(c, t⃗, ∼_NI) ∩ ⟦m⟧_{L,Γ}

Thus, we have K(c, t⃗, ∼_NI) ⊇ M.
•case ⟸: we know

    k(c, m, t⃗, L, Γ) = K(c, t⃗, ∼_NI) ∩ ⟦m⟧_{L,Γ}
    M ⊆ K(c, t⃗, ∼_NI)
    M ⊆ ⟦m⟧_{L,Γ}
    M = M ∩ M ⊆ K(c, t⃗, ∼_NI) ∩ ⟦m⟧_{L,Γ}

Thus, we know k(c, m, t⃗, L, Γ) ⊇ M. From the assumption k(c, m, t⃗, L, Γ) ⊆ M, we know k(c, m, t⃗, L, Γ) = M.
So, we have k(c, m, t⃗, L, Γ) = M ⟺ K(c, t⃗, ∼_NI) ⊇ M. □
1) Gradual Release: Lemma 1. With ∼ ≜ ∼_GR and A ≜ A_GR, Definition 10 is equivalent to Definition 11:

    ∀c, m, L, i, t⃗. ⟨c, m⟩ → t⃗ ⟹
    ( i not a release event ⇒ k(c, m, t⃗[:i], L, Γ) = k(c, m, t⃗[:i−1], L, Γ) )
    ⟺ K(c, t⃗[:i], ∼_GR) ⊇ ⟦m⟧_{L, t⃗[i].γ} ∩ K(c, t⃗[:i−1], ∼_GR)

Proof. From the encoding of Gradual Release, we know:

    t⃗[i].γ = γ_⊥    if i is a release event
    t⃗[i].γ = Γ      if i is not a release event
•case when i is a release event: t⃗[i].γ = γ_⊥. From the definition, we know ⟦m⟧_{L,γ_⊥} returns the singleton set {m}.
From ⟨c, m⟩ → t⃗ and the definition of K, we know ∀j. m ∈ K(c, t⃗[:j], ∼_GR):

    m ∈ K(c, t⃗[:i], ∼_GR)
    m ∈ K(c, t⃗[:i−1], ∼_GR)
    {m} = ⟦m⟧_{L, t⃗[i].γ}

Thus, both Definition 10 and 11 are trivially true.
•case when i is not a release event: t⃗[i].γ = Γ. From the definitions, we know ∼_GR = ∼_NI. We know from the monotonicity of knowledge that:

    k(c, m, t⃗[:i−1], L, Γ) ⊆ ⟦m⟧_{L,Γ}
    k(c, m, t⃗[:i], L, Γ) ⊆ k(c, m, t⃗[:i−1], L, Γ)

So, we can instantiate Lemma 7 with

    M := k(c, m, t⃗[:i−1], L, Γ),    t⃗ := t⃗[:i]

and we get:

    k(c, m, t⃗[:i], L, Γ) = k(c, m, t⃗[:i−1], L, Γ)
    ⟺ K(c, t⃗[:i], ∼_GR) ⊇ k(c, m, t⃗[:i−1], L, Γ)

By definition, we know

    k(c, m, t⃗[:i−1], L, Γ) = ⟦m⟧_{L,Γ} ∩ K(c, t⃗[:i−1], ∼_GR)

Thus, when i is not a release event, we have:

    k(c, m, t⃗[:i], L, Γ) = k(c, m, t⃗[:i−1], L, Γ)
    ⟺ K(c, t⃗[:i], ∼_GR) ⊇ ⟦m⟧_{L, t⃗[i].γ} ∩ K(c, t⃗[:i−1], ∼_GR)

Therefore, Definition 10 is equivalent to Definition 11. □
2) Tight Gradual Release: Lemma 2. With ∼ ≜ ∼_TGR, ≡ ≜ ≡_TGR and A ≜ A_TGR, Definition 10 is equivalent to Definition 12:

    ∀i. 1 ≤ i ≤ ‖t⃗‖.
        (⟦m⟧_{L,Γ} ∩ ⟦m⟧_{E_i}) ⊆ k(c, m, t⃗[:i], L, Γ)
        ⟺ K(c, t⃗′[:i], ∼_TGR) ⊇ ⟦m⟧_{L, t⃗[i].γ}

Proof. The encoding limits E_i to a variable set X_i; thus, we assume E_i = X_i. From the definition, we know that

    k(c, m, t⃗, L, Γ) = K(c, t⃗′[:i], ∼_TGR) ∩ ⟦m⟧_{L,Γ}    (7)

From the encoding, we know that

    ⟦m⟧_{L, t⃗[i].γ} = ⟦m⟧_{L,Γ} ∩ ⟦m⟧_{E_i}    (8)

•case ⟹: From Equation 8 and the assumption, we know

    ⟦m⟧_{L, t⃗[i].γ} ⊆ k(c, m, t⃗[:i], L, Γ)

From Equation 7, we know

    k(c, m, t⃗[:i], L, Γ) ⊆ K(c, t⃗′[:i], ∼_TGR)

Therefore, we have ⟦m⟧_{L, t⃗[i].γ} ⊆ K(c, t⃗′[:i], ∼_TGR).
•case ⟸: By taking an intersection with ⟦m⟧_{L,Γ} on both sides of the assumption, we have:

    K(c, t⃗′[:i], ∼_TGR) ∩ ⟦m⟧_{L,Γ} ⊇ ⟦m⟧_{L, t⃗[i].γ} ∩ ⟦m⟧_{L,Γ}

Applying Equation 7 to the left and Equation 8 to the right:

    k(c, m, t⃗[:i], L, Γ) ⊇ (⟦m⟧_{L,Γ} ∩ ⟦m⟧_{E_i}) ∩ ⟦m⟧_{L,Γ}
                          = ⟦m⟧_{L,Γ} ∩ ⟦m⟧_{E_i}

Thus, we have k(c, m, t⃗[:i], L, Γ) ⊇ (⟦m⟧_{L,Γ} ∩ ⟦m⟧_{E_i}).
Therefore, Definition 10 is equivalent to Definition 12. □
3) According to Policy p: Lemma 3. With ∼ ≜ ∼_AP, A ≜ A_AP, and outside equivalence ≡ ≜ ≡′_AP, Definition 10 is equivalent to Definition 13.
Proof. First, we convert security levels L from Denning's style to our attacker levels l as described in Section III, and output every intermediate memory of the trace at its variable's level. That is,

    ∀c, m₀, m′, cᵢ, mᵢ, i, e, Γ.
        τ = ⟨c, m₀⟩_{γ₀} →* ⟨cᵢ, mᵢ⟩_{γᵢ} →* m′ ∧ τ[i] = ⟨cᵢ, mᵢ⟩_{γᵢ}
        ⟺ ⟨c, m₀⟩ → t⃗ ∧ t⃗[i] = {⟨ch, n, γ⟩ | ch = e ∧ n = mᵢ(e) ∧ γ = γᵢ}

We note that our normal t⃗[i] returns a single output event, say some t = ⟨ch, n, γ⟩. But here we overload t⃗[i] to return a set of output events that output all values in memory τ[i]. Thus, with all values in memory output, we have:

    ∀c, m₁, m₂, m′₁, m′₂, τ, τ′, t⃗₁, t⃗₂.
        τ = ⟨c, m₁⟩ →* m′₁ ∧ τ′ = ⟨c, m₂⟩ →* m′₂
        ∧ ⟨c, m₁⟩ → t⃗₁ ∧ ⟨c, m₂⟩ → t⃗₂ ⟹
        ( τ[i] ≈_l τ′[j] ⟺ ⌊t⃗₁[i]⌋_l = ⌊t⃗₂[j]⌋_l )

Thus, we rewrite Definition 13 in the following two-run style:

    ∀c, m₁, m₂, l, p, t⃗₁, t⃗₂.
        m₂ ∈ ⟦m₁⟧_p ∧ ⟨c, m₁⟩ → t⃗₁ ∧ ⟨c, m₂⟩ → t⃗₂ ⟹
        ∃R. ∀(i, j) ∈ R. t⃗₁[i] ∈ ⌊t⃗₁⌋_{p,l} ∧ t⃗₂[j] ∈ ⌊t⃗₂⌋_{p,l}
            ⇒ ⌊t⃗₁[i]⌋_l = ⌊t⃗₂[j]⌋_l

We combine the two filters and take R′ to be R after filtering:

    ∀c, m₁, m₂, l, p, t⃗₁, t⃗₂.
        m₂ ∈ ⟦m₁⟧_p ∧ ⟨c, m₁⟩ → t⃗₁ ∧ ⟨c, m₂⟩ → t⃗₂ ⟹
        ∃R′. ∀(i, j) ∈ R′. (⌊t⃗₁⌋_{p,l})[i] = (⌊t⃗₂⌋_{p,l})[j]

With K(c, t⃗, ∼_AP) unfolded as below:

    K(c, t⃗, ∼_AP) = {m₂ | ∀m₂, t⃗₂. ⟨c, m₂⟩ → t⃗₂
        ∧ ∃R′. ∀(i, j) ∈ R′. (⌊t⃗⌋_{p,l})[i] = (⌊t⃗₂⌋_{p,l})[j]}

we can further rewrite the definition as follows:

    ∀c, m₁, m₂, l, p, t⃗₁.
        m₂ ∈ ⟦m₁⟧_p ∧ ⟨c, m₁⟩ → t⃗₁ ⟹ m₂ ∈ K(c, t⃗₁, ∼_AP)

That is,

    ∀c, m₁, l, p, t⃗₁. ⟨c, m₁⟩ → t⃗₁ ⟹ ⟦m₁⟧_p ⊆ K(c, t⃗₁, ∼_AP)

We note that only the equivalence relation in ∼_AP is ≡_AP. The equivalence relation t⃗′ ≡ t⃗ in Definition 10 in this case is not ≡_AP, but ≡′_AP ≜ {(t⃗₁, t⃗₂) | t⃗₁ = t⃗₂}. □
4) Cryptographic Erasure: Lemma 4.With ∼,∼CE,
≡,≡CE and A,ACE, Definition 10 with adjusted attack
model is equivalent to Definition 14.
∀c, m, γ0, ci, mi, γi, cn, mn, γn, m0,
t1,
t2, l, i, j, n.
hc, miγ0
t1
−→ hci, miiγi
t2
−→ hcn, mniγn→∗m0=⇒
kCE(c, L, ∼CE )⊇\
i≤j≤n
JmKL,γj⇐⇒
K(c,
t2,∼CE)⊇\
tn∈
t2
JmKL,tn.γ
Proof. We note that in Definition 14,γjare state policies
attached to the configurations, not from the output event
t.
According to the definition,
tdoes not contain empty events. In
Definition 14, it takes n−jsteps to generate output sequence
t2, we know n−j≥ k
t2k. We first show that the right hand
side allowance defined using γjis the same as using state
policy from the output sequence
t:
kCE(c, L,
t2)⊇\
i≤j≤n
JmKL,γj
⇐⇒ kCE(c, L,
t2)⊇\
tn∈
t2
JmKL,tn.γ (9)
• case $\Longrightarrow$: Crypto [5] supports only erasure policies (and static policies). That is, the sensitivity of any security entity is monotonically increasing:
$$\forall j \in [i, n].\ \gamma_j \preceq \gamma_{j+1}$$
From the definition of memory closure, we know:
$$\forall \gamma_1, \gamma_2.\ \gamma_1 \preceq \gamma_2 \Longrightarrow \llbracket m \rrbracket_{L,\gamma_1} \subseteq \llbracket m \rrbracket_{L,\gamma_2}$$
Thus, we know:
$$\bigcap_{i \le j \le n} \llbracket m \rrbracket_{L,\gamma_j} = \llbracket m \rrbracket_{L,\gamma_i} \qquad\qquad \bigcap_{t_n \in \bar{t}_2} \llbracket m \rrbracket_{L,t_n.\gamma} = \llbracket m \rrbracket_{L,\bar{t}_2[0].\gamma}$$
From the definitions, we know $\bar{t}_2[0].\gamma = \gamma_i$ if $\langle c_i, m_i\rangle_{\gamma_i}$ does not immediately generate an empty output event. Otherwise, if the first non-empty event is generated at configuration $\langle c_{i'}, m_{i'}\rangle_{\gamma_{i'}}$ $(i < i' < n)$, we know:
$$\llbracket m \rrbracket_{L,\gamma_i} \subseteq \llbracket m \rrbracket_{L,\gamma_{i'}} = \llbracket m \rrbracket_{L,\bar{t}_2[0].\gamma}$$
We can instantiate Definition 14 with $i := i'$, and we get:
$$k_{CE}(c, L, \bar{t}_2) \supseteq \bigcap_{i' \le j \le n} \llbracket m \rrbracket_{L,\gamma_j} = \llbracket m \rrbracket_{L,\gamma_{i'}} = \llbracket m \rrbracket_{L,\bar{t}_2[0].\gamma}$$
Thus, we have $k_{CE}(c, L, \bar{t}_2) \supseteq \bigcap_{t_n \in \bar{t}_2} \llbracket m \rrbracket_{L,t_n.\gamma}$.
• case $\Longleftarrow$: from $n - j \ge \lVert \bar{t}_2 \rVert$, we know:
$$\forall t_n \in \bar{t}_2.\ \exists j' \in [i, n].\ \gamma_{j'} = t_n.\gamma$$
$$\{\, t_n.\gamma \mid t_n \in \bar{t}_2 \,\} \subseteq \{\, \gamma_j \mid i \le j \le n \,\}$$
$$\bigcap_{t_n \in \bar{t}_2} \llbracket m \rrbracket_{L,t_n.\gamma} \supseteq \bigcap_{i \le j \le n} \llbracket m \rrbracket_{L,\gamma_j}$$
Thus, we have $k_{CE}(c, L, \bar{t}_2) \supseteq \bigcap_{i \le j \le n} \llbracket m \rrbracket_{L,\gamma_j}$.
Therefore, we know Equation 9 is true.
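Both directions of Equation 9 rest on the monotonicity of the closure in the policy. The following sketch adopts a toy encoding of our own (not Crypto's [5]): a state policy $\gamma$ is just the set of variables hidden from the L-attacker, and erasure only ever enlarges that set, so the intersection of closures along the run collapses to the earliest one.

```python
# Toy encoding (an assumption of ours, not Crypto's [5]): a state policy
# gamma is the set of variables hidden from the L-attacker, and erasure
# only ever enlarges it, so gamma_j is increasing along the run.
from itertools import product

VARS = ["x", "y", "z"]
MEMS = [dict(zip(VARS, bits)) for bits in product([0, 1], repeat=3)]

def closure(m, gamma):
    # memories indistinguishable from m: they may differ only on the
    # hidden variables collected in gamma
    return {tuple(sorted(m2.items())) for m2 in MEMS
            if all(m2[v] == m[v] for v in VARS if v not in gamma)}

m = dict(x=0, y=1, z=0)
gammas = [{"z"}, {"y", "z"}, {"x", "y", "z"}]   # monotonically increasing

# gamma1 <= gamma2 implies closure(m, gamma1) subset of closure(m, gamma2)
for g1, g2 in zip(gammas, gammas[1:]):
    assert closure(m, g1) <= closure(m, g2)

# hence the intersection over the whole run collapses to the first closure
inter = set.intersection(*(closure(m, g) for g in gammas))
assert inter == closure(m, gammas[0])
print(len(closure(m, gammas[0])), len(closure(m, gammas[-1])))   # -> 2 8
```

Hiding more variables makes more memories indistinguishable, which is exactly the containment $\llbracket m \rrbracket_{L,\gamma_1} \subseteq \llbracket m \rrbracket_{L,\gamma_2}$ used above.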
Now we convert a security level $L$ from Denning's style to our attacker level $l$ as described in Section III. Let $\llbracket c \rrbracket \triangleq \{\, m \mid \exists \bar{t}.\ \langle c, m\rangle \rightarrow \bar{t} \,\}$ denote the set of memories on which $c$ terminates. From the definitions we know:
$$\mathcal{K}(c, \bar{t}, \sim_{CE}) = k_{CE}(c, L, \bar{t}) \cap \llbracket c \rrbracket$$
Since we are interested in a termination-insensitive policy, we can ignore the difference made by the termination set $\llbracket c \rrbracket$. Thus, we assume $\mathcal{K}(c, \bar{t}, \sim_{CE}) = k_{CE}(c, L, \bar{t})$. $\square$
5) Forgetful Attacker: Lemma 5. With $\sim\,\triangleq\,\sim_{FA}$ and $\mathcal{A} \triangleq \mathcal{A}_{FA}$, Definition 10 is equivalent to Definition 15.
Proof. In the forgetful attacker model [3], the sensitivity level is changed by the setPolicy command. Recall from our encoding that setPolicy is encoded using security commands and generates a security event, but no output event. So there is no sensitivity change between the two states that generate an output. That is, for the output event $t_0$ in the trace
$$\langle c, m\rangle \xrightarrow{\ \bar{t} \cdot t_0\ } {}^{*}\ \langle c', m'\rangle \xrightarrow{\ \langle b, v, \gamma'\rangle \cdot \ldots\ } {}^{*}$$
we know $t_0.\gamma = \gamma'$ and therefore we have
$$\llbracket m \rrbracket_{L,\gamma'} = \llbracket m \rrbracket_{L, t_0.\gamma}$$
Definition 15 is rephrased as:
$$\forall c, m, L, i, \bar{t}.\ \ \langle c, m\rangle \rightarrow \bar{t} \Longrightarrow k_{FA}(c, L, Atk, \bar{t}[:i]) \subseteq k_{FA}(c, L, Atk, \bar{t}[:i-1]) \cap \llbracket m \rrbracket_{L,\bar{t}[i].\gamma}$$
By definition, we know:
$$k_{FA}(c, L, Atk, \bar{t}) = \mathcal{K}(c, \bar{t}, \sim_{FA})$$
Thus, we know Definition 15 is equivalent to Definition 10. $\square$
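The key observation, that setPolicy produces a security event but no output event, so every output is stamped with the state policy in force when it is emitted, can be sketched as follows. This is a toy event encoding of our own, not the paper's semantics.

```python
# Toy event encoding (our own sketch, not the paper's semantics):
# setPolicy yields a security event that changes gamma but emits no
# output; every output event is stamped with the gamma in force when
# it is emitted, so t0.gamma equals the emitting state's policy.
def run(events):
    gamma, trace = "L", []
    for ev in events:
        if ev[0] == "set":            # security event: update gamma only
            gamma = ev[1]
        else:                         # ("out", value): stamped output event
            trace.append(("out", ev[1], gamma))
    return trace

t = run([("out", 1), ("set", "H"), ("out", 2), ("out", 3)])
print(t)   # -> [('out', 1, 'L'), ('out', 2, 'H'), ('out', 3, 'H')]
```

Because the policy change and the next output never coincide in one event, reading the policy off the output event is the same as reading it off the emitting state.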
6) Paralock: Lemma 6. With $\sim\,\triangleq\,\sim_{PL}$ and $\mathcal{A} \triangleq \mathcal{A}_{PL}$, Definition 10 is equivalent to Definition 16.
$$\forall c, m, \bar{t}, \bar{t}', i, A.\ \ \langle c, m\rangle \rightarrow \bar{t} \land \bar{t}' = \bar{t}[:i] \Longrightarrow \bar{t}[i].\Delta \subseteq \Sigma_A \Rightarrow \Big( k_{PL}(c, m, \bar{t}[:i], A) = k_{PL}(c, m, \bar{t}[:i-1], A) \iff \mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq \mathcal{K}(c, \bar{t}[:i-1], \sim_{PL}) \cap \llbracket m \rrbracket_A \Big)$$
Proof. We omit the case when $\bar{t}[i].\Delta \not\subseteq \Sigma_A$, since both definitions are trivially true. By definition, we know:
$$\forall j.\ k_{PL}(c, m, \bar{t}[:j], A) = \mathcal{K}(c, \bar{t}[:j], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
• case $\Longrightarrow$: we know:
$$k_{PL}(c, m, \bar{t}[:i-1], A) = k_{PL}(c, m, \bar{t}[:i], A)$$
$$k_{PL}(c, m, \bar{t}[:i], A) = \mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
$$\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq \mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
Thus, we have
$$\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq k_{PL}(c, m, \bar{t}[:i-1], A)$$
With $k_{PL}(c, m, \bar{t}[:i-1], A) = \mathcal{K}(c, \bar{t}[:i-1], \sim_{PL}) \cap \llbracket m \rrbracket_A$, we get $\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq \mathcal{K}(c, \bar{t}[:i-1], \sim_{PL}) \cap \llbracket m \rrbracket_A$.
• case $\Longleftarrow$: we know:
$$\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq \mathcal{K}(c, \bar{t}[:i-1], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
$$k_{PL}(c, m, \bar{t}[:i-1], A) = \mathcal{K}(c, \bar{t}[:i-1], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
Thus, we have
$$\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \supseteq k_{PL}(c, m, \bar{t}[:i-1], A) \qquad (10)$$
We know $\llbracket m \rrbracket_A$ is the initial knowledge of $A$ before observing any output event. From the monotonicity of Paralock knowledge, we know:
$$\llbracket m \rrbracket_A \supseteq k_{PL}(c, m, \bar{t}[:i-1], A) \qquad (11)$$
By taking the intersection of both sides of Equations (10) and (11), we have:
$$\mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \cap \llbracket m \rrbracket_A \supseteq k_{PL}(c, m, \bar{t}[:i-1], A) \cap k_{PL}(c, m, \bar{t}[:i-1], A) = k_{PL}(c, m, \bar{t}[:i-1], A)$$
Thus, we have
$$k_{PL}(c, m, \bar{t}[:i-1], A) \subseteq \mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
By definition, we have:
$$k_{PL}(c, m, \bar{t}[:i], A) = \mathcal{K}(c, \bar{t}[:i], \sim_{PL}) \cap \llbracket m \rrbracket_A$$
Thus, we know:
$$k_{PL}(c, m, \bar{t}[:i-1], A) \subseteq k_{PL}(c, m, \bar{t}[:i], A)$$
From the monotonicity of Paralock knowledge, we know
$$k_{PL}(c, m, \bar{t}[:i-1], A) \supseteq k_{PL}(c, m, \bar{t}[:i], A)$$
Thus, we have:
$$k_{PL}(c, m, \bar{t}[:i-1], A) = k_{PL}(c, m, \bar{t}[:i], A)$$
Therefore, Definition 10 is equivalent to Definition 16. $\square$
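The monotonicity of knowledge used twice in this proof, namely that each additional observation can only shrink the set of consistent initial memories, can be checked on a toy program. This is a hypothetical example of ours; Paralock knowledge is additionally parameterized by locks and actors.

```python
# Toy sketch (hypothetical program; Paralock knowledge is additionally
# parameterized by locks and actors): knowledge after a prefix t[:i] is
# the set of initial memories producing that visible prefix, and it can
# only shrink as the attacker observes more events.
from itertools import product

MEMS = [dict(h1=a, h2=b) for a, b in product([0, 1], repeat=2)]

def prog(m):
    return [m["h1"], m["h1"] ^ m["h2"]]          # two visible outputs

def k(prefix):
    # initial memories whose run agrees with the observed prefix
    return {tuple(m.items()) for m in MEMS
            if prog(m)[:len(prefix)] == prefix}

t = prog(dict(h1=1, h2=0))
# monotonicity: k(t[:i]) is a superset of k(t[:i+1]) for every prefix
assert all(k(t[:i]) >= k(t[:i + 1]) for i in range(len(t)))
print([len(k(t[:i])) for i in range(len(t) + 1)])   # -> [4, 2, 1]
```

The shrinking chain of knowledge sets is what makes the two superset inequalities around Equations (10) and (11) combine into an equality.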