Bounded-Deducibility Security
Andrei Popescu
Department of Computer Science, University of Sheffield, UK
Thomas Bauereiss
Department of Computer Science and Technology, University of Cambridge, UK
Peter Lammich
Department of Computer Science, University of Twente, The Netherlands
Abstract
We describe Bounded-Deducibility (BD) security, an expressive framework for the specification and
verification of information-flow security. The framework grew by confronting concrete challenges of
specifying and verifying fine-grained confidentiality properties in some realistic web-based systems.
The concepts and theorems that constitute this framework have an eventful history of such "confrontations",
often involving trial and error, which are reported in previous papers. This paper is the first to focus
on the framework itself rather than the case studies, gathering in one place all the abstract results
about BD security.
2012 ACM Subject Classification Security and privacy → Formal security models; Security and privacy → Logic and verification; Security and privacy → Security requirements
Keywords and phrases Information-flow security, Unwinding proof method, Compositionality, Verification
Digital Object Identifier 10.4230/LIPIcs.ITP.2021.3
Category Invited Paper
Funding The work presented here has been supported by: EPSRC through the grant "Verification of Web-based Systems (VOWS)" (EP/N019547/1); DFG through the grants "Security Type Systems and Deduction" (Ni 491/13-2) and "MORES – Modelling and Refinement of Security Requirements on Data and Processes" (Hu 737/5-2), part of "RS³ – Reliably Secure Software Systems" (SPP 1496); VeTSS through the grant "Formal Verification of Information Flow Security for Relational Databases"; Innovate UK through the Knowledge Transfer Partnership 010041 between Caritas Anchor House and Middlesex University: "The Global Noticeboard (GNB): a verified social media platform with a charitable, humanitarian purpose".
Acknowledgements We are fortunate to have collaborated with some excellent researchers and
developers on various parts of the implementation and veriĄcation work based on BD security:
Sergey Grebenshchikov, Ping Hou, Sudeep Kanav and Ondřej Kunčar have contributed to CoCon,
while Armando Pesenti Gritti and Franco Raimondi have contributed to CoSMed and CoSMeDis.
We thank this paper's reviewers for their helpful comments and suggestions.
1 Introduction
Bounded-Deducibility (BD) security is a framework we have developed recently for the
specification and verification of information-flow security. It is widely applicable to systems
described as nondeterministic I/O automata, and caters for the fine-grained specification of
restrictions on their flows of information. We formalized the framework in the proof assistant
Isabelle/HOL [31, 32] and used it in the verification of confidentiality properties of some web
applications.
Information-flow security has a rich history, with many formal definitions having been
proposed, differing in how systems, attackers, and flow policies are modeled [18, 26, 29, 30, 33, 34, 42–44, 46, 47]. Nevertheless, a new notion seemed necessary because the existing notions
(Section 4) were not expressive enough for our case studies: multi-user web-based systems
with flows of information requiring fine-grained control. For example, about a multi-user
multi-conference management system, we wanted to prove a property such as the following,
which refers to the series of uploads of a document’s versions in a system: “A group of users
learn nothing about a paper beyond the absence of any upload unless one of them becomes an
author of that paper or a PC member at the paper’s conference.” (Importantly, the property
is about not only what can be directly accessed, but also what can be learned by interacting
with the system – this distinguishes information-flow control from mere access control.)
Every abstract definition and theorem in the BD security framework was inspired by, and
refined based on, the needs of concrete interactive systems. This ended up contributing to
the area of information-flow security an increased level of precision in specification and proof,
of the kind that we believe can make a difference in practical system verification.
In previous papers [7–9, 23, 37], BD security has only been discussed in the context of
verifying these concrete systems. This helps with intuition and motivation, but makes it
easy to miss the forest for the trees, i.e., to miss the abstract level of the development. The
current paper is the first to collect in one place all our abstract results, and to present
them independently of any case studies (Section 2). They include the BD unwinding proof
method (Section 2.5), as well as theorems on proof (Section 2.6) and system compositionality
(Section 2.7). We hope that this paper will better demonstrate the scope of the framework
and help identify potential new applications. The framework is open-ended and open-source [10, 35], and new contributions are welcome.
Three major verification case studies will also be briefly described while recalling their
contribution to the framework’s design (Section 3). These are the CoCon conference manage-
ment system (Section 3.1, [23,37]), the CoSMed social media platform (Section 3.2, [7, 9]),
and the CoSMeDis distributed extension of CoSMed (Section 3.3, [8]).
Notations
We write function application by juxtaposition, without placing the argument in parentheses, as in f a, unless required for disambiguation, e.g., f (g a). Multiple-argument functions will usually be considered in curried form – e.g., we think of f : A → B → C as a two-argument function, and f a b denotes its application to a and b. We write "◦" for function composition. Bool denotes the two-element set of Booleans, {true, false}. Predicates and relations will be modeled as functions to Bool. For example, P : A → Bool is a (unary) predicate on A and Q : A → A → Bool is a binary relation on A. Given a ∈ A, we write "P a holds", or simply "P a", to mean that P a = true; and similarly for binary relations.

Given a set A, we write Set(A) for the powerset (i.e., set of all subsets) of A, and List(A) for the set of lists with elements in A. We write [a1, . . . , an] for the list consisting of the indicated elements; in particular, [] is the empty list and [a] is a singleton list. As a general convention, if a, b denote elements in A, then al, bl will denote elements in List(A). An exception will be the system traces – even though they are lists of transitions t, for them we will use the customized notation tr. We write "·" for list concatenation. Applied to a non-empty list [a1, . . . , an], the function head returns its first element a1. Given a function f : A → B and [a1, . . . , an] ∈ List(A), map f [a1, . . . , an] returns [f a1, . . . , f an]. Given a partial function f : A ⇀ B and [a1, . . . , an] ∈ List(A), let [ai1, ai2, . . . , aik] be the sublist of [a1, . . . , an] that keeps only elements on which f is defined (where 1 ≤ i1 < i2 < · · · < ik ≤ n); then map f [a1, . . . , an] returns [f ai1, . . . , f aik]. In other words, partial functions are mapped while omitting the elements on which they are not defined. Given a predicate P, filter P [a1, . . . , an] returns the sublist of [a1, . . . , an] that keeps only the elements satisfying P.
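As a small illustration of this convention, partial functions can be encoded as Option-valued functions, with mapping silently dropping the undefined positions. The Scala sketch below uses our own illustrative names and is not part of the framework:

```scala
// Partial functions A ⇀ B are encoded as total functions A => Option[B];
// mapping applies f where it is defined and drops the other elements.
def mapPartial[A, B](f: A => Option[B])(xs: List[A]): List[B] =
  xs.flatMap(a => f(a).toList)

// Example: halve the even numbers and omit the odd ones.
// mapPartial((n: Int) => if (n % 2 == 0) Some(n / 2) else None)(List(1, 2, 3, 4))
// evaluates to List(1, 2)
```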
2 Specification and Reasoning Framework
Our framework is developed around a simple and general notion of system: nondeterministic
I/O automata. It also provides a notion of policy to describe the (dis)allowed flows of
information in these systems. A policy has several parameters that regulate the tension
between observations (what can be seen) and secrets (what needs to be protected). The
judicious use of these parameters allows fine-tuning not only what, but also how much needs
to be protected, and when, or even for how long. The framework offers methods to prove
that the policies are satisfied by systems, and to manage proof and system complexity via
compositionality results.
2.1 System model
The systems whose information-flow security properties will be studied are nondeterministic I/O automata. Namely, we call system a tuple A = (State, Act, Out, istate, Trans), where:
State, ranged over by σ, σ′ etc., is the set of states;
Act, ranged over by a, b etc., is the set of actions;
Out, ranged over by ou, ou′ etc., is the set of outputs;
istate ∈ State is the initial state;
Trans ⊆ State × Act × Out × State is the set of transitions.
(Note that we call "action" what is usually called "input" for I/O automata.) A transition t = (σ, a, ou, σ′) ∈ Trans has the following interpretation: If action a is taken while the system is in state σ, the system may respond by producing output ou and changing the state to σ′. We call σ the source, a the action, ou the output, and σ′ the target of t. The transition's action a is also denoted by actOf t. We will write σ =t⇒ σ′ to express that t ∈ Trans, σ is the source of t and σ′ is the target of t.

A trace is any non-empty list of transitions [t1, . . . , tn] such that the source of t1 is istate and, for all i ∈ {2, . . . , n}, the source of ti is the target of ti−1. We let Trace, ranged over by tr, be the set of traces. A trace fragment has the form [ti, . . . , tj] with 1 ≤ i < j ≤ n, where [t1, . . . , tn] is a trace. We write TraceFσ for the set of trace fragments that start in σ, i.e., have σ as the source of their first transition. Note that all these concepts are relative to a system A. When we want to emphasize the underlying system, we may write TraceA instead of Trace, TraceFA,σ instead of TraceFσ, etc.
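This system model admits a near-literal functional transcription. The following Scala sketch (our own illustrative rendering, not the Isabelle formalization) represents a system and the trace property:

```scala
// A nondeterministic I/O automaton: transitions are (source, action, output, target) tuples.
final case class Sys[St, Act, Out](
  istate: St,                          // initial state
  trans: Set[(St, Act, Out, St)]       // Trans ⊆ State × Act × Out × State
)

// A trace: a non-empty list of transitions, chained and starting in the initial state.
def isTrace[St, Act, Out](sys: Sys[St, Act, Out],
                          tr: List[(St, Act, Out, St)]): Boolean =
  tr.nonEmpty &&
  tr.forall(sys.trans.contains) &&
  tr.head._1 == sys.istate &&
  tr.zip(tr.tail).forall { case (t, tNext) => t._4 == tNext._1 }
```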
2.2 Flow policies
Given a system A = (State, Act, Out, istate, Trans), our goal is to express its information-flow security via policies that are capable of fine-grained distinctions between desirable flows (which are important for the system's functionality) and undesirable flows (which constitute information leaks possibly exploitable by attackers). To achieve such surgical precision, a policy should accurately identify the following: (1) What observations can be made on the system, (2) Which data constitute secrets that need protection, (3) How much of these secrets should be protected (and how much can be revealed), and (4) Under which conditions protection is required.

For accommodating these requirements, we define a flow policy F to consist of:
(1) an observation infrastructure (Obs, isObs, getObs), where
Obs, ranged over by o, o′ etc., is a chosen domain of observations,
isObs : Trans → Bool is a predicate identifying observation-producing transitions,
getObs : Trans → Obs is a function for producing observations from transitions;
(2) a secrecy infrastructure (Sec, isSec, getSec), where
Sec, ranged over by s, s′ etc., is a chosen domain of secrets,
isSec : Trans → Bool is a predicate identifying secret-producing transitions,
getSec : Trans → Sec is a function for producing secrets from transitions;
(3) a declassification bound, i.e., a relation on lists of secrets, B : List(Sec) → List(Sec) → Bool;
(4) a declassification trigger, i.e., a predicate on transitions, T : Trans → Bool.

Note that the observation and secrecy infrastructures have the same form. We define O : Trace → List(Obs) by O = map getObs ◦ filter isObs, and S : Trace → List(Sec) by S = map getSec ◦ filter isSec. Thus, O uses filter to select the transitions in a trace that are observable according to isObs, and then applies getObs to each selected transition. Similarly, S produces lists of secrets by filtering with isSec and applying getSec. Thus, when applied to a trace tr, O and S give the lists of observations and respectively secrets produced by tr.
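The flow policy components and the induced functions O and S have a direct functional reading; the following Scala sketch (with our own illustrative names, over an abstract transition type Tr) mirrors the definitions above:

```scala
// A flow policy over transitions of type Tr, with observation domain Obs and secret domain Sec.
final case class Policy[Tr, Obs, Sec](
  isObs: Tr => Boolean, getObs: Tr => Obs,        // observation infrastructure
  isSec: Tr => Boolean, getSec: Tr => Sec,        // secrecy infrastructure
  bound: (List[Sec], List[Sec]) => Boolean,       // declassification bound B
  trigger: Tr => Boolean                          // declassification trigger T
)

// O = map getObs ◦ filter isObs
def obsOf[Tr, Obs, Sec](p: Policy[Tr, Obs, Sec])(tr: List[Tr]): List[Obs] =
  tr.filter(p.isObs).map(p.getObs)

// S = map getSec ◦ filter isSec
def secOf[Tr, Obs, Sec](p: Policy[Tr, Obs, Sec])(tr: List[Tr]): List[Sec] =
  tr.filter(p.isSec).map(p.getSec)
```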
2.3 Bounded-Deducibility security
For the rest of Section 2, let us fix a system A = (State, Act, Out, istate, Trans) and a flow policy F, where (Obs, isObs, getObs) is its observation infrastructure, (Sec, isSec, getSec) its secrecy infrastructure, B its declassification bound and T its declassification trigger. Furthermore, let O : Trace → List(Obs) and S : Trace → List(Sec) be the functions on traces induced by these observation and secrecy infrastructures.

A system A is said to be Bounded-Deducibility (BD) secure with respect to the flow policy F, written A |= F, provided that for all tr1 ∈ Trace and sl1, sl2 ∈ List(Sec),
if never T tr1, S tr1 = sl1 and B sl1 sl2,
then there exists tr2 ∈ Trace such that O tr2 = O tr1 and S tr2 = sl2.
The predicate never T tr1 says that T holds for no transition in tr1.

Here is how to interpret the above definition: tr1 is a trace that occurs when running the system, and sl1 is the list of secrets that it produces. BD security says that, if the trigger T is never fired during tr1, it is impossible for an observer (potential attacker) to distinguish tr1 from any other trace tr2 that produces some secrets sl2 that are B-related to (i.e., located within bound B from) sl1. Hence, for all the observer knows (via the observation function O), the trace tr1 might as well have been tr2.

When referring to the items in this definition, we will call tr1 "the original trace" and tr2 "the alternative trace". We will also apply the qualifiers "original" and "alternative" to the produced lists of observations and secrets. Note that BD security is a ∀∃-statement: quantified universally over the original trace tr1 and the alternative secrets sl2, and then existentially over the alternative trace tr2. (The universal quantification over sl1 is done only for clarity; it can be avoided, since sl1 = S tr1.)

We can think of B negatively, as a lower bound for uncertainty, or positively, as an upper bound for the amount of information release, also known as declassification. For example, if B is an equivalence, then the observers learn the equivalence class of the secret, but nothing more. On the other hand, T is a trigger removing the bound B: As soon as T becomes true, the containment of declassification is no longer guaranteed. In summary, BD security says: An observer O cannot learn about the secrets anything beyond B unless T occurs.
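To make the ∀∃ statement concrete, it can be checked by brute force over finite data. The Scala sketch below is only a bounded approximation under strong assumptions – the traces and the candidate alternative secret lists are given as explicit finite sets – and all names are illustrative:

```scala
// A bounded, executable reading of BD security: every original trace on which the
// trigger never fires must be observationally matched by some alternative trace
// producing each B-related candidate list of secrets.
def bdSecureApprox[Tr, Obs, Sec](
    traces: Set[List[Tr]],                      // (a finite approximation of) Trace
    candidates: Set[List[Sec]],                 // candidate alternative secret lists sl2
    obsOf: List[Tr] => List[Obs],               // the function O
    secOf: List[Tr] => List[Sec],               // the function S
    bound: (List[Sec], List[Sec]) => Boolean,   // the bound B
    trigger: Tr => Boolean                      // the trigger T
): Boolean =
  traces.forall { tr1 =>
    val sl1 = secOf(tr1)
    tr1.exists(trigger) ||                      // "never T tr1" fails: nothing to show
    candidates.filter(sl2 => bound(sl1, sl2)).forall { sl2 =>
      traces.exists(tr2 => obsOf(tr2) == obsOf(tr1) && secOf(tr2) == sl2)
    }
  }
```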
Figure 1 BD security illustrated.
Fig. 1 contains a visual illustration of BD security's two-dimensional nature: The system traces (displayed on the top left corner) produce observations (on the bottom left), as well as secrets (on the top right). The figure also includes an abstract example of traces and their observation and secret projections. The original trace tr1 consists of three transitions, tr1 = [t1, t′1, t′′1], of which all produce secrets, [s1, s′1, s′′1], and only the first and the third produce observations, [o1, o′′1] – all these are depicted in red. The alternative trace tr2 also consists of three transitions, tr2 = [t2, t′2, t′′2], of which the first and the third produce secrets, [s2, s′′2], and the first two produce observations, [o2, o′2] – all these are depicted in blue. Thus, the figure's functions O and S are given by filters and producers behaving as follows:

For tr1:   isObs   getObs   isSec   getSec
 t1        true    o1       true    s1
 t′1       false   –        true    s′1
 t′′1      true    o′′1     true    s′′1

For tr2:   isObs   getObs   isSec   getSec
 t2        true    o2       true    s2
 t′2       true    o′2      false   –
 t′′2      false   –        true    s′′2

The empty slots in the tables correspond to values of getObs and getSec that are irrelevant, since the corresponding values of isObs and isSec are false. The ∀∃ statement expressing BD security is illustrated on the figure by making a choice of the ∀-quantified entities and the ∃-quantified entities: Given the original trace, here [t1, t′1, t′′1] (which produces the shown observations and secrets and has all its transitions satisfying ¬T) and given some alternative secrets, here [s2, s′′2], located within the bound B of the original secrets, BD security requires the existence of the alternative trace, here [t2, t′2, t′′2], producing the same observations and producing the alternative secrets.
2.4 From nondeducibility to bounded deducibility
BD security is a natural evolution of the idea of nondeducibility introduced in pioneering work by Sutherland [46]: by refining the notion of "nothing being deducible" to that of "nothing being deducible beyond a certain bound and unless a certain trigger occurs".

Indeed, nondeducibility can be expressed in terms of operators O : Trace → List(Obs) and S : Trace → List(Sec) by requiring that, for all tr1 ∈ Trace and sl1, sl2 ∈ List(Sec), if S tr1 = sl1 then there exists tr2 ∈ Trace such that O tr2 = O tr1 and S tr2 = sl2. Thus, BD security becomes nondeducibility when B is everywhere true and T everywhere false – meaning no declassification, i.e., maximum uncertainty.
2.5 Unwinding proof method
To prove that the system is BD secure with respect to the flow policy, A |= F, one needs to do the following: Given
the original trace tr1 for which never T holds and which produces the list of secrets sl1,
and an alternative list of secrets sl2 such that B sl1 sl2 holds,
one should provide an alternative trace tr2 whose produced list of secrets is exactly sl2 and whose produced list of observations is the same as that of tr1.

Following the tradition of unwinding for noninterference-like properties [19, 26, 41], we want to construct tr2 from tr1 incrementally: As tr1 grows, tr2 should grow nearly synchronously. Unwindings are traditionally binary relations ∆ on State that bookkeep the states reached by tr1 and tr2, say σ1 and σ2, and show how these can evolve transition by transition in the process of constructing tr2 from tr1; they guarantee that any ∆-related states σ1 and σ2 evolve via transitions σ1 =t1⇒ σ′1 and σ2 =t2⇒ σ′2 to ∆-related states σ′1 and σ′2. In our case, unlike in the traditional case, we have a significantly more complex infrastructure to deal with: Since the produced observations of tr1 and tr2 will have to be equal, it is reasonable to track them synchronously; but the produced secrets are regulated by arbitrary bounds B, hence we will have to track them more flexibly.

To address the above, an unwinding for BD security will be not just a binary relation between states, but a binary relation between pairs consisting of a state and a list of secrets. Let us introduce some convenient notation to describe this. For any pairs (σ, sl) and (σ′, sl′) in State × List(Sec) and any transition t, we will write (σ, sl) =t⇒ (σ′, sl′) as a shorthand for the following two statements: (1) σ =t⇒ σ′, and (2) either ¬isSec t and sl′ = sl, or isSec t and there exists s such that getSec t = s and sl = [s] · sl′. The second statement means that the transition t either does not produce a secret thus leaving sl unchanged (sl′ = sl), or produces the secret from the beginning of sl thus reducing it to sl′; we can think of this as a transition between lists of secrets that are still to be produced. Moreover, for any two transitions t1 and t2, we will write t1 =Obs t2 as a shorthand for the following two statements: (1) isObs t1 if and only if isObs t2, and (2) if isObs t1 then getObs t1 = getObs t2. In other words, t1 and t2 produce either the same observation or no observation.
A relation ∆ : (State × List(Sec)) → (State × List(Sec)) → Bool is said to be a BD unwinding if, for all (σ1, sl1), (σ2, sl2) ∈ State × List(Sec) such that σ1 is (¬T)-reachable, σ2 is reachable and ∆ (σ1, sl1) (σ2, sl2), we have that one of the following three cases holds:
(1) sl1 ≠ [] or sl2 = [], and reaction ∆ (σ1, sl1) (σ2, sl2); or
(2) iaction ∆ (σ1, sl1) (σ2, sl2); or
(3) sl1 ≠ [] and exit σ1 (head sl1).
Above, a state being reachable means that there exists a trace tr leading to it; and (¬T)-reachability additionally requires that all transitions in tr satisfy ¬T.

The predicates reaction, iaction (read "independent action") and exit will be defined below. The first two describe possible evolution patterns for the pairs (σ1, sl1) and (σ2, sl2) so that the result is still in ∆. By contrast, the exit predicate provides a shortcut for an early finish during a proof by unwinding. When reading the definitions of these predicates, the reader should keep in mind what we want from a BD unwinding: to manage the incremental growth of an alternative trace (that has currently reached state σ2), in response to the growth of an original trace (that has currently reached state σ1), while considering the list of secrets sl1 that the remainder of the original trace is assumed to produce and the list of secrets sl2 that the remainder of the alternative trace will have to produce.
reaction ∆ (σ1, sl1) (σ2, sl2) is defined to mean that, for all t1 ∈ Trans and (σ′1, sl′1) ∈ State × List(Sec) such that (σ1, sl1) =t1⇒ (σ′1, sl′1), one of the following two cases holds:
(1) ¬isObs t1 and ∆ (σ′1, sl′1) (σ2, sl2); or
(2) there exist t2 ∈ Trans and (σ′2, sl′2) ∈ State × List(Sec) such that (σ2, sl2) =t2⇒ (σ′2, sl′2), t1 =Obs t2 and ∆ (σ′1, sl′1) (σ′2, sl′2).
Thus, reaction ∆ (σ1, sl1) (σ2, sl2) describes two ways in which one can "react" to a transition t1 taken by the original trace: (1) either ignoring it (if it is unobservable), or (2) matching it with a transition t2 of the alternative trace. In both cases, we must stay in ∆.
iaction ∆ (σ1, sl1) (σ2, sl2) is defined to mean that there exist t2 ∈ Trans and (σ′2, sl′2) ∈ State × List(Sec) such that (σ2, sl2) =t2⇒ (σ′2, sl′2), ¬isObs t2, isSec t2 and ∆ (σ1, sl1) (σ′2, sl′2).
Thus, iaction describes the possibility of an "independent" (i.e., non-reactive) action by taking an unobservable secret-producing transition in the alternative trace. While the unobservability requirement (¬isObs t2) is justified by the desire to keep the observations synchronized, the reason for the secret-producing requirement (isSec t2) is more subtle: Repeating unobservable and non-secret-producing independent actions could indefinitely delay the growth of the original trace while making no progress with the alternative list of secrets, rendering unwinding reasoning unsound.
exit σ s is defined to mean that, for all states σ′ that are (¬T)-reachable from σ and all transitions t with source σ′ such that ¬T t, if isSec t then getSec t ≠ s.
The idea behind exit is that BD security holds trivially for original traces that are unable to produce their due list of secrets sl1; and exit detects this (thus closing that branch of the unwinding proof) by noticing that not even the first secret in sl1 can be produced starting from the current state σ1 – indeed, in the definition of unwinding, exit is invoked with σ1 and head sl1.
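For a finite transition set, reaction and iaction admit a direct executable reading (the exit shortcut, which additionally needs (¬T)-reachability, is omitted here). The Scala sketch below uses our own illustrative names, keeps transitions abstract, and passes ∆ as a function on pairs of a state and the secrets still to be produced:

```scala
// Context bundling the finite transition set and the policy operators used by unwinding.
final case class UnwindCtx[Tr, St, Obs, Sec](
  trans: Set[Tr],
  src: Tr => St, tgt: Tr => St,
  isObs: Tr => Boolean, getObs: Tr => Obs,
  isSec: Tr => Boolean, getSec: Tr => Sec
) {
  type Pt = (St, List[Sec])            // a state plus the secrets still to be produced
  type Delta = (Pt, Pt) => Boolean

  // (σ, sl) =t⇒ (σ', sl'): the unique successor pair, if the step applies at all.
  def step(t: Tr, p: Pt): Option[Pt] = {
    val (sigma, sl) = p
    if (src(t) != sigma) None
    else if (!isSec(t)) Some((tgt(t), sl))                      // no secret: sl unchanged
    else sl match {
      case s :: rest if s == getSec(t) => Some((tgt(t), rest))  // t produces the head of sl
      case _                           => None
    }
  }

  // t1 =Obs t2: same observation, or both unobservable.
  def eqObs(t1: Tr, t2: Tr): Boolean =
    isObs(t1) == isObs(t2) && (!isObs(t1) || getObs(t1) == getObs(t2))

  // reaction: every step on the original side is ignored (if unobservable) or matched.
  def reaction(delta: Delta, p1: Pt, p2: Pt): Boolean =
    trans.forall { t1 =>
      step(t1, p1).forall { p1N =>
        (!isObs(t1) && delta(p1N, p2)) ||
        trans.exists(t2 => step(t2, p2).exists(p2N => eqObs(t1, t2) && delta(p1N, p2N)))
      }
    }

  // iaction: an unobservable, secret-producing step on the alternative side.
  def iaction(delta: Delta, p1: Pt, p2: Pt): Boolean =
    trans.exists(t2 => !isObs(t2) && isSec(t2) &&
      step(t2, p2).exists(p2N => delta(p1, p2N)))
}
```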
Left unexplained so far are the (non)emptiness conditions guarding the invocations of the reaction and exit predicates in the definition of BD unwinding. For exit, it is obvious that we need sl1 ≠ [] for talking about the first element in sl1. But for reaction, why require that sl1 ≠ [] or sl2 = []? Again, this decision has to do with the soundness of BD unwinding as a proof method: If the negation of this condition is true, it means that the original trace is done with producing its secrets (sl1 = []) and the alternative trace still has some secrets to produce (sl2 ≠ []). In that case, we want to enforce an iaction move which, being secret-producing, would make progress through the remaining alternative list of secrets sl2; this is achieved by preventing a reaction move, which would be the only alternative (since an exit move needs sl1 ≠ []). With these definitions, BD unwinding fulfills its goal:

▶ Lemma 1. [23, 37] Assume ∆ is a BD unwinding and let σ1, σ2 ∈ State such that reach¬T σ1 and reach σ2. Then, for all tr1 ∈ TraceFσ1 and sl1, sl2 ∈ List(Sec),
if never T tr1, S tr1 = sl1 and ∆ (σ1, sl1) (σ2, sl2),
then there exists tr2 ∈ TraceFσ2 such that O tr2 = O tr1 and S tr2 = sl2.

In other words, assuming ∆ (σ1, sl1) (σ2, sl2) holds and given the remaining part tr1 of the original trace (starting in σ1) which produces secrets sl1, there exists a trace tr2 that produces the same observations and produces the desired secrets sl2. The lemma's proof goes by induction on the sum of the lengths of tr1 and sl2. The induction step either reaches a contradiction (if exit is invoked), or consumes a transition from tr1 (if reaction is invoked) or a secret from sl2 (if iaction is invoked).

To connect this result to BD security, in particular to factor in the bound B as well, we additionally require that a BD unwinding ∆ includes the bound B in the initial state. So we can think of ∆ as generalizing and strengthening the bound, and then maintaining it all the way to the successful production of the alternative trace required by BD security. We are closing in on the main result about BD unwinding, a consequence of the lemma taking σ1 = σ2 = istate. It states that BD unwinding is a sound proof method for BD security.
▶ Theorem 2. (Unwinding Theorem [23, 37]) Assume that the following hold:
(1) For all sl1, sl2 ∈ List(Sec), if B sl1 sl2 then ∆ (istate, sl1) (istate, sl2).
(2) ∆ is a BD unwinding.
Then A |= F.
According to this theorem, to prove BD security of a system, it suffices to define a relation
∆ and show that (1) it includes the bound B in the initial state and (2) it is a BD unwinding.
2.6 Proof compositionality
When verifying a BD security policy for a large system, defining a single monolithic BD
unwinding could be daunting. We can alleviate this by working not with a single unwinding
relation, but with a network of relations, such that any relation may “unwind” into any
number of relations in the network.
To this end, we refine the notion of BD unwinding. Given a relation ∆ and a set of relations ∆s, ∆ is said to be a BD unwinding into ∆s if it satisfies the same conditions as in the definition of BD unwinding, just that iaction ∆ and reaction ∆ are replaced by iaction (⋁∆s) and reaction (⋁∆s), where ⋁∆s is the disjunction (i.e., union) of all the relations in ∆s. Namely, for all (σ1, sl1), (σ2, sl2) ∈ State × List(Sec) such that σ1 is (¬T)-reachable, σ2 is reachable and ∆ (σ1, sl1) (σ2, sl2), one of the following three cases holds:
(1) sl1 ≠ [] or sl2 = [], and reaction (⋁∆s) (σ1, sl1) (σ2, sl2); or
(2) iaction (⋁∆s) (σ1, sl1) (σ2, sl2); or
(3) sl1 ≠ [] and exit σ1 (head sl1).
This enables a form of sound compositional reasoning: If we verify a condition as above
for each component relation, we obtain an overall secure system.
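Concretely, ⋁∆s is just the pointwise disjunction of the relations in ∆s; for example, in a Scala rendering with illustrative types:

```scala
// Relations on State × List(Sec), and the disjunction ⋁∆s used in "BD unwinding into ∆s".
type Rel[St, Sec] = ((St, List[Sec]), (St, List[Sec])) => Boolean

def disj[St, Sec](ds: Set[Rel[St, Sec]]): Rel[St, Sec] =
  (p1, p2) => ds.exists(d => d(p1, p2))
```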
▶ Theorem 3. (Multiplex Unwinding Theorem [37]) Let ∆s be a set of relations. For each ∆ ∈ ∆s, let next∆ ⊆ ∆s be a (possibly empty) set of "successors" of ∆, and let ∆init ∈ ∆s be a chosen "initial" relation. Assume the following hold:
(1) For all sl1, sl2 ∈ List(Sec), if B sl1 sl2 then ∆init (istate, sl1) (istate, sl2).
(2) Each ∆ ∈ ∆s is a BD unwinding into next∆.
Then A |= F.
The network of components can form any directed graph – Fig. 2 shows an example. However, when doing concrete proofs by unwinding, we found that the following essentially linear network often suffices (Fig. 3): Each ∆i unwinds either into itself, or into ∆i+1 (if i ≠ n), or into an exit component ∆e that always chooses the "exit" unwinding condition. (In practice, ∆e will collect "error" situations that break invariants, hence preventing the original trace from producing its due secrets.)

Figure 2 A network of unwinding components. Figure 3 A linear network with exit.

To express this, we define the notion of ∆ being a BD continuation-unwinding into ∆s similarly to that of "BD unwinding into" but excluding the exit case, i.e., requiring that either (1) sl1 ≠ [] or sl2 = [], and reaction (⋁∆s) (σ1, sl1) (σ2, sl2), or (2) iaction (⋁∆s) (σ1, sl1) (σ2, sl2) hold. And ∆ is said to be a BD exit-unwinding if the exit case, (3) sl1 ≠ [] and exit σ1 (head sl1), holds. We obtain:

▶ Theorem 4. (Sequential Multiplex Unwinding Theorem [37]) Consider the indexed set of relations {∆1, . . . , ∆n} and the relation ∆e such that the following hold:
(1) For all sl1, sl2 ∈ List(Sec), if B sl1 sl2 then ∆1 (istate, sl1) (istate, sl2).
(2) ∆i is a BD continuation-unwinding into {∆i, ∆i+1, ∆e}.
(3) ∆e is a BD exit-unwinding.
Then A |= F.
Although the Multiplex Unwinding Theorems are easy consequences of the (plain) Unwinding Theorem, we found them to be very useful tools for managing proof complexity.
2.7 System compositionality
A complexity management desideratum equally important to proof compositionality is system
compositionality: the possibility to infer BD security for a compound system from BD security
of the components. Next, we will describe a compositionality result for a communicating
network of systems. We start with two, then we generalize to n systems.
2.7.1 Product systems
Let A1 = (State1, Act1, Out1, istate1, Trans1) and A2 = (State2, Act2, Out2, istate2, Trans2) be two systems. We want to model communication between A1 and A2 by matching certain transitions that these systems must take synchronously while exchanging data. This is captured by a relation match : Trans1 → Trans2 → Bool. Transition matching gives a very flexible communication scheme: It can model message-passing communication using the transitions' actions and outputs, but also shared-state communication using the transitions' source and target states.

We will distinguish between separate (local) component actions and communication actions. We write isComi a (for i ∈ {1, 2}) to indicate that an action a is in the latter category for Ai. Namely, isComi a holds whenever there exist t1 and t2 such that match t1 t2 holds and a is the action of ti.

We define the match-communicating product of A1 and A2, written A1 ×match A2, as the following system (State, Act, Out, istate, Trans):
State = State1 × State2;
Act = Act1 + Act2 + Act1 × Act2; thus, Act is a disjoint union of Act1 (representing separate actions of the first component), Act2 (for separate actions of the second component), and Act1 × Act2 (for joint communicating actions); we write (1, a1), (2, a2), and (a1, a2) for actions of the first, second and third kind, respectively;
Out = Out1 + Out2 + Out1 × Out2; thus, like Act, Out is a disjoint union, and we use similar notations for its elements: (1, ou1), (2, ou2) and (ou1, ou2);
istate = (istate1, istate2);
Trans contains three kinds of transitions:
separate A1-transitions ((σ1, σ2), (1, a1), (1, ou1), (σ′1, σ2)), where (σ1, a1, ou1, σ′1) ∈ Trans1 and ¬isCom1 a1;
separate A2-transitions ((σ1, σ2), (2, a2), (2, ou2), (σ1, σ′2)), where (σ2, a2, ou2, σ′2) ∈ Trans2 and ¬isCom2 a2;
communication transitions ((σ1, σ2), (a1, a2), (ou1, ou2), (σ′1, σ′2)), where (σ1, a1, ou1, σ′1) ∈ Trans1, (σ2, a2, ou2, σ′2) ∈ Trans2 and match (σ1, a1, ou1, σ′1) (σ2, a2, ou2, σ′2).

Thus, a transition t of A1 ×match A2 has exactly one of the three forms shown above. In the first case, t is completely determined by an A1-transition t1 = (σ1, a1, ou1, σ′1) and an A2-state σ2 – we write t = sep1 t1 σ2, marking that t is given by the separate transition t1. Similarly, in the second case we write t = sep2 σ1 t2, where t2 = (σ2, a2, ou2, σ′2). In the third case, we write t = com t1 t2, marking that t proceeds as a communication transition. Thus, in our new notation, any transition of A1 ×match A2 has either the form sep1 t1 σ2, or sep2 σ1 t2, or com t1 t2.
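To make the three transition forms concrete, the sketch below constructs the product's transition set from finite component data in Scala. The datatype PTrans and all names are our own illustration (component transitions are kept abstract, with isCom lifted to whole transitions), not part of the formal development; each sep1/sep2/com value determines a concrete product transition as described above:

```scala
// Product transitions in the sep1/sep2/com notation; component transitions have types T1, T2
// and component states S1, S2. Covariance lets each case fit the common parent type.
sealed trait PTrans[+T1, +S1, +T2, +S2]
final case class Sep1[T1, S2](t1: T1, sigma2: S2) extends PTrans[T1, Nothing, Nothing, S2]
final case class Sep2[S1, T2](sigma1: S1, t2: T2) extends PTrans[Nothing, S1, T2, Nothing]
final case class Com[T1, T2](t1: T1, t2: T2) extends PTrans[T1, Nothing, T2, Nothing]

// All transitions of A1 ×match A2, given finite component data; comAct1/comAct2 say whether
// a transition's action is a communication action (the isCom1/isCom2 predicates of the text).
def productTrans[T1, S1, T2, S2](
    trans1: Set[T1], states1: Set[S1], comAct1: T1 => Boolean,
    trans2: Set[T2], states2: Set[S2], comAct2: T2 => Boolean,
    matches: (T1, T2) => Boolean
): Set[PTrans[T1, S1, T2, S2]] = {
  // separate A1-transitions, paired with every A2-state left unchanged
  val sep1: Set[PTrans[T1, S1, T2, S2]] =
    for (t1 <- trans1 if !comAct1(t1); s2 <- states2) yield Sep1(t1, s2)
  // separate A2-transitions, paired with every A1-state left unchanged
  val sep2: Set[PTrans[T1, S1, T2, S2]] =
    for (t2 <- trans2 if !comAct2(t2); s1 <- states1) yield Sep2(s1, t2)
  // communication transitions: matched pairs of component transitions
  val com: Set[PTrans[T1, S1, T2, S2]] =
    for (t1 <- trans1; t2 <- trans2 if matches(t1, t2)) yield Com(t1, t2)
  sep1 ++ sep2 ++ com
}
```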
2.7.2 Product flow policies
Let F1 and F2 be flow policies for A1 and A2. Given i ∈ {1, 2}, we write (Obsi, isObsi, getObsi) for the observation infrastructure, (Seci, isSeci, getSeci) for the secrecy infrastructure, Bi for the declassification bound and Ti for the declassification trigger of Fi. We want to compose the policies F1 and F2 in a natural way, forming a policy for the product A1 ×match A2. To achieve this, we need observation and secret counterparts of the transition-matching predicate match, in the form of predicates matchO : Obs1 → Obs2 → Bool and matchS : Sec1 → Sec2 → Bool. Triples (match, matchO, matchS) will be called communication infrastructures.

A sanity property that we will assume about our communication infrastructures is that their matching operators are compatible with (i.e., preserved by) the secrecy and observation infrastructure operators.

Compatible Communication: For all t1 ∈ Trans1 and t2 ∈ Trans2, if match t1 t2 then:
isSec1 t1 if and only if isSec2 t2, and in this case we have matchS (getSec1 t1) (getSec2 t2);
isObs1 t1 if and only if isObs2 t2, and in this case we have matchO (getObs1 t1) (getObs2 t2).

The product of F1 and F2 along a communication infrastructure (match, matchO, matchS), written F1 ×(match,matchO,matchS) F2, is defined as the following flow policy for A1 ×match A2. We start with its observation and secrecy infrastructures, which are naturally defined considering that observations and secrets can be produced either separately or in communication steps. The observation infrastructure (Obs, isObs, getObs) is the following:
Obs = Obs1 + Obs2 + Obs1 × Obs2; thus, an element of Obs will have either the form (1, o1), or (2, o2), or (o1, o2), where oi ∈ Obsi.
For any t ∈ Trans, isObs t and getObs t are defined as follows:
if t has the form sep1 t1 σ2, then isObs t = isObs1 t1 and getObs t = (1, getObs1 t1);
if t has the form sep2 σ1 t2, then isObs t = isObs2 t2 and getObs t = (2, getObs2 t2);
if t has the form com t1 t2, then isObs t = (isObs1 t1 and isObs2 t2) and getObs t = (getObs1 t1, getObs2 t2).

One could argue that, when t has the form com t1 t2, isObs t should be defined not as (1) isObs1 t1 and isObs2 t2, but as (2) isObs1 t1 or isObs2 t2, thus making the compound transition observable if either component transition is observable. However, we will only work under the assumption of Compatible Communication (introduced above), which makes (1) and (2) equivalent.
Figure 4 Shuffle product for lists of secrets, defined inductively by the rules:
Empty: [] ∈ [] ×matchS []
Sep1: if sl ∈ sl1 ×matchS sl2 and ¬isComS1 s1, then sl · [(1, s1)] ∈ (sl1 · [s1]) ×matchS sl2
Sep2: if sl ∈ sl1 ×matchS sl2 and ¬isComS2 s2, then sl · [(2, s2)] ∈ sl1 ×matchS (sl2 · [s2])
Com: if sl ∈ sl1 ×matchS sl2 and matchS s1 s2, then sl · [(s1, s2)] ∈ (sl1 · [s1]) ×matchS (sl2 · [s2])
The secrecy infrastructure (Sec, isSec, getSec) is defined similarly to the observation infrastructure: Sec is taken to be Sec1 + Sec2 + Sec1 × Sec2, and isSec and getSec are defined correspondingly.

The trigger T of the product flow policy is also the natural one: Any firing of the trigger on either side, either separately or during communication, will fire the composite trigger. Formally, we take T t to mean the following: (1) if t has the form sep1 t1 σ2, then T1 t1 holds; (2) if t has the form sep2 σ1 t2, then T2 t2 holds; (3) if t has the form com t1 t2, then T1 t1 holds or T2 t2 holds.

It remains to define the bound B of the product flow policy. Let sl ∈ List(Sec) be a list of secrets in the composite secret domain. Intuitively, the most restrictive bound B we can hope for will forbid the declassification, for any lists of secrets sl1 ∈ List(Sec1) and sl2 ∈ List(Sec2) into which sl can be decomposed (i.e., which can be combined to make up sl), of anything beyond what can be declassified about sl1 and sl2 within the components' bounds B1 and B2.

To capture this, we collect all valid ways of combining sl1 and sl2, via the matchS-shuffle product operator ×matchS : List(Sec1) → List(Sec2) → Set(List(Sec)) whose inductive definition is shown in Fig. 4. The set sl1 ×matchS sl2 contains all possible interleavings of sl1 and sl2, achieved by separate individual steps (rules Sep1 and Sep2) and communication steps (rule Com). For i ∈ {1, 2}, isComSi s is the secret counterpart of the predicate isComi, expressing that the secret s participates in a matchS-relationship. We define B sl sl′ to mean that, for all sl1, sl′1 ∈ List(Sec1) and sl2, sl′2 ∈ List(Sec2), if sl ∈ sl1 ×matchS sl2 and sl′ ∈ sl′1 ×matchS sl′2, then B1 sl1 sl′1 and B2 sl2 sl′2 hold.
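For concreteness, the shuffle product can be computed recursively on finite lists. The Scala sketch below follows Fig. 4's rules, appending at the end of the lists exactly as the rules do; the Either encoding of composite secrets and all names are our own illustration:

```scala
// Composite secrets: Left(Left(s1)) plays the role of (1, s1), Left(Right(s2)) of (2, s2),
// and Right((s1, s2)) of a joint secret (s1, s2).
type Sec12[S1, S2] = Either[Either[S1, S2], (S1, S2)]

// sl1 ×matchS sl2: all interleavings allowed by the rules of Fig. 4, built from the right.
def shuffle[S1, S2](isComS1: S1 => Boolean, isComS2: S2 => Boolean,
                    matchS: (S1, S2) => Boolean)(
                    sl1: List[S1], sl2: List[S2]): Set[List[Sec12[S1, S2]]] =
  (sl1, sl2) match {
    case (Nil, Nil) => Set(Nil)                                          // rule Empty
    case _ =>
      def rec(l1: List[S1], l2: List[S2]) = shuffle(isComS1, isComS2, matchS)(l1, l2)
      val viaSep1: Set[List[Sec12[S1, S2]]] =                            // rule Sep1
        if (sl1.nonEmpty && !isComS1(sl1.last))
          rec(sl1.init, sl2).map(_ :+ Left(Left(sl1.last)))
        else Set.empty
      val viaSep2: Set[List[Sec12[S1, S2]]] =                            // rule Sep2
        if (sl2.nonEmpty && !isComS2(sl2.last))
          rec(sl1, sl2.init).map(_ :+ Left(Right(sl2.last)))
        else Set.empty
      val viaCom: Set[List[Sec12[S1, S2]]] =                             // rule Com
        if (sl1.nonEmpty && sl2.nonEmpty && matchS(sl1.last, sl2.last))
          rec(sl1.init, sl2.init).map(_ :+ Right((sl1.last, sl2.last)))
        else Set.empty
      viaSep1 ++ viaSep2 ++ viaCom
  }
```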
2.7.3 Compositionality result
We next introduce some properties that refer to the flow policies F1 and F2 and the communication infrastructure (match, matchO, matchS). Together with Compatible Communication, they will be sufficient for compositionality.

Strong Communication: For all t1 ∈ Trans1 and t2 ∈ Trans2, if the following hold:
isCom1 (actOf1 t1) and isCom2 (actOf2 t2),
isObs1 t1, isObs2 t2 and matchO (getObs1 t1) (getObs2 t2),
isSec1 t1 and isSec2 t2 imply matchS (getSec1 t1) (getSec2 t2),
then match t1 t2 holds.
The property says that, for observable communicating transitions, observation matching together with secret matching (the latter conditional on secrecy) causes the matching of the entire transitions.

Observable Communication: For all t1 ∈ Trans1, isCom1 (actOf1 t1) implies isObs1 t1; and for all t2 ∈ Trans2, isCom2 (actOf2 t2) implies isObs2 t2.
The property says that all communicating transitions are observable (i.e., isObs is true for them), although it does not say anything about what can actually be observed about them (via getObs).

Secret Polarization: For all t2 ∈ Trans2, isSec2 t2 implies isCom2 (actOf2 t2).
The property says that any A2-transition that is secret-producing must be a communicating transition, which means that only A1 is able to produce secrets independently.
We are now ready to state our system compositionality result about BD security:
▶ Theorem 5. (System Compositionality Theorem [8]) Assume that the flow policies F1 and F2 and the communication infrastructure (match, matchO, matchS) satisfy all the above properties, namely Compatible, Strong and Observable Communication, and Secret Polarization. Moreover, assume A1 |= F1 and A2 |= F2. Then A1 ×match A2 |= F1 ×(match,matchO,matchS) F2.
In [8], we discuss in great detail this theorem’s assumptions in the context of verifying a
concrete distributed system. The main strength of the theorem is that it allows composing
general bounds and triggers. For this to work, we put restrictions on the observation and
secrecy infrastructures. Among these, Compatible Communication seems to occur naturally
in communicating systems – at least in our case studies of interest, which are multi-user web-
based systems. When targeting such systems, Strong and Observable Communication seem to
be achievable for a given desired policy via a uniform process of strengthening the observation
and secrecy infrastructures: allowing one to observe as much non-sensitive information as
possible, and making minor adjustments to the bounds and triggers to accommodate the
additional harmless information unblocked [8, App. B].
On the other hand, Secret Polarization is the major limitation of the theorem.¹ For multi-user systems, this means that, for the notion of secret defined by the flow policies F1 and F2, only users of one of the two component systems, A1, can be allowed to upload secrets. However, this does not prevent us from considering another notion of secret, where the other component is the issuer, as part of a different pair of flow policies F′1 and F′2.²
Finally, an inconvenience of applying the theorem is the somewhat artificial nature of
the composite bound. While by design the composite bound is as restrictive as possible
(which is good for accuracy), in practice we would prefer a less restrictive but more readable
bound, referring to secrets of a simpler nature than the composite secrets. To obtain this, we
can perform an adjustment using a general-purpose theorem that transports a BD security
property between different observation and secret domains, possibly loosening the bound
and weakening the trigger, i.e., overall weakening the flow policy.
This works as follows. Let F and F′ be two flow policies for a system A, where we write (Obs, isObs, getObs) and (Obs′, isObs′, getObs′) for their observation infrastructures, and similarly for their secrecy infrastructures, bounds and triggers. F′ is said to be weaker than F, written F′ ≤ F, if there exist two partial functions f : Sec ⇀ Sec′ and g : Obs ⇀ Obs′ that preserve the secrecy and observation infrastructures, the bounds and the triggers, i.e., such that the following hold:
isSec′ t if and only if isSec t and f is defined on getSec t, and in this case f (getSec t) = getSec′ t;
isObs′ t if and only if isObs t and g is defined on getObs t, and in this case g (getObs t) = getObs′ t;
T t implies T′ t;
B′ sl′ tl′ and map f sl = sl′ imply that there exists tl such that map f tl = tl′ and B sl tl.
¹ In [8, Sec. V.8], we discuss in great detail the technical reasons for requiring Secret Polarization, which have to do with BD security favoring the under-specification of the time ordering between observations and secrets.
² See also [8, App. E] for a discussion on combining independent secret sources for more holistic multi-policy security guarantees.
▶ Theorem 6. (Transport Theorem [8]) If A |= F and F′ ≤ F, then A |= F′.
In conclusion, one can use the System Compositionality Theorem to obtain for the composite system A1 ×match A2 a flow policy F = F1 ×(match,matchO,matchS) F2 with a strong bound, and the Transport Theorem to produce from this a perhaps weaker but more natural flow policy F′ (for the same system A1 ×match A2). [8, App. A] gives more intuition on using the two theorems in tandem.
2.7.4 The n-ary case
The System Compositionality Theorem generalizes quite smoothly from the binary to the n-ary case. Let (Ak = (Statek, Actk, Outk, istatek, Transk))k∈{1,...,n} be a family of n systems. We fix, for each k, k′ with k ≠ k′, a matching predicate matchk,k′ : Transk × Transk′ → Bool. We write match for the family (matchk,k′)k,k′ and isComk,k′ : Actk → Bool for the corresponding notion of communication action (belonging to Ak and pertaining to communication with Ak′). We will make the sanity assumption that a system cannot use the same action to communicate with different systems.

Pairwise-Dedicated Communication: If k′ ≠ k′′, then for all k the predicates isComk,k′ and isComk,k′′ are disjoint, in that there exists no a ∈ Actk such that isComk,k′ a and isComk,k′′ a.

The match-communicating product of the family of systems (Ak)k∈{1,...,n}, written ∏match k∈{1,...,n} Ak, generalizes the binary case. Namely, it is the following system (State, Act, Out, istate, Trans):
State = ∏k∈{1,...,n} Statek; so the states are families (σk)k∈{1,...,n}, or (σk)k for short;
Act = ∑k∈{1,...,n} Actk + ∑k,k′∈{1,...,n}, k≠k′ Actk × Actk′; we write (i, ai) for elements of the i'th summand on the left (separate actions by component Ai), and ((i, ai), (j, aj)) for elements of the (i, j)'th summand on the right (joint communicating actions by components Ai and Aj);
Out = ∑k∈{1,...,n} Outk + ∑k,k′∈{1,...,n}, k≠k′ Outk × Outk′ (similarly to Act);
istate = (istatek)k∈{1,...,n};
Trans contains two kinds of transitions:
for i ∈ {1, . . . , n}, separate Ai-transitions ((σk)k, (i, ai), (i, oui), (σk)k[i := σ′i]), where (σi, ai, oui, σ′i) ∈ Transi and ¬isComi ai;
for i, j ∈ {1, . . . , n} such that i ≠ j, communication transitions (between Ai and Aj) ((σk)k, ((i, ai), (j, aj)), ((i, oui), (j, ouj)), (σk)k[i := σ′i, j := σ′j]), where (σi, ai, oui, σ′i) ∈ Transi, (σj, aj, ouj, σ′j) ∈ Transj and matchi,j (σi, ai, oui, σ′i) (σj, aj, ouj, σ′j).
Above, we wrote (σk)k[i := σ′i] for the family of states that is the same as (σk)k, except for the index i where it is updated from σi to σ′i; and similarly for (σk)k[i := σ′i, j := σ′j].
Given the flow policies Fk for the component systems Ak and the families of matching predicates for transitions, match = (matchk,k′)k,k′, observations, matchO = (matchOk,k′)k,k′, and secrets, matchS = (matchSk,k′)k,k′, the product flow policy ∏(match,matchO,matchS) k∈{1,...,n} Fk is defined as a straightforward generalization of the binary case. For example, its observation domain is ∑k∈{1,...,n} Obsk + ∑k,k′∈{1,...,n}, k≠k′ Obsk × Obsk′, so that it contains either separate observations (k, ok) or joint observations ((k, ok), (k′, ok′)). Its trigger T is defined on separate i-transitions to be the trigger of the i component, and on (i, j)-communication transitions to be the disjunction of the triggers of the i and j components. And its bound B sl sl′ is defined from the component bounds: For all (slk)k, (sl′k)k ∈ ∏k∈{1,...,n} List(Seck), if sl ∈ ×matchS (slk)k and sl′ ∈ ×matchS (sl′k)k, then, for all k, Bk slk sl′k holds – where ×matchS is the n-ary matchS-shuffle product operator, which applied to a family of lists of secrets (slk)k gives all possible interleavings of these lists achieved by separate individual steps and communication steps.
Now we can formulate an n-ary generalization of the System Compositionality Theorem. Most of its assumptions will be those of the binary version, applied to all pairs of components (k, k′) for k, k′ ∈ {1, . . . , n} and k ≠ k′. The only exception is Secret Polarization, which must be strengthened. It is not sufficient to have a single secret issuer for every pair (k, k′), but we need a unique secret issuer for the entire system of n components.

Unique Secret Polarization: There exists i ∈ {1, . . . , n} such that for all k ∈ {1, . . . , n} with k ≠ i and for all t ∈ Transk, isSeck t implies isComk,i (actOfk t).
▶ Theorem 7. (System Compositionality Theorem, n-ary case [8]) Assume the following:
For all k, k′ ∈ {1, . . . , n} such that k ≠ k′, the flow policies Fk and Fk′ and their communication infrastructure (matchk,k′, matchOk,k′, matchSk,k′) satisfy the properties of Pairwise-Dedicated, Compatible, Strong and Observable Communication.
The families (Fk)k∈{1,...,n} and (matchk,k′, matchOk,k′, matchSk,k′)k,k′∈{1,...,n}, k≠k′ (as a whole) satisfy Unique Secret Polarization.
Ak |= Fk for all k ∈ {1, . . . , n}.
Then ∏match k∈{1,...,n} Ak |= ∏(match,matchO,matchS) k∈{1,...,n} Fk.
In conclusion, the generalization of the System Compositionality Theorem to the n-ary case proceeds almost pairwise, but with an additional sanity assumption (Pairwise-Dedicated Communication) and a strengthened assumption (Unique Secret Polarization).
3 Verified Systems
We have formalized in Isabelle/HOL the BD security framework (consisting of Section 2’s
concepts and theorems) [10, 35]. Recall that the framework operates on nondeterministic
I/O automata. We have instantiated it to particular (deterministic) automata representing
the functional kernels of some web-based systems. Fig. 5 shows the high-level architecture of
these systems, which follows a paradigm of security by design:
The kernel is formalized and verified in Isabelle.
The formalization is automatically translated into a functional programming language
– which in all our case studies was Scala, one of the target languages of Isabelle’s code
generator [20, 21].
The translated program is wrapped in a user-friendly web application.

Figure 5 High-level architecture of the verified systems.

3.1 CoCon

CoCon [23, 36, 37] is an EasyChair-like conference management system, which was deployed to two international conferences: TABLEAUX 2015 and ITP 2016 [37, §5]. The web application layer of Fig. 5 was realized as a thin REST API implemented in Scalatra [45] wrapped around the verified kernel together with a stateless GUI written in AngularJS [2] that communicates with the API.

Table 1 Confidentiality properties for CoCon. The observations are made by a group of users G. Phase stamps (in parentheses): B = Bidding, D = Discussion, N = Notification, R = Review.
Secrets | Declassification Trigger | Declassification Bound
Paper | Some user in G is one of the paper's authors | Last uploaded version
Paper | Some user in G is one of the paper's authors or a PC member (B) | Absence of any upload
Review | Some user in G is the review's author | Last edited version before Discussion and all the later versions
Review | Some user in G is the review's author or a non-conflicted PC member (D) | Last edited version before Notification
Review | Some user in G is the review's author or a non-conflicted PC member (D) or the reviewed paper's author (N) | Absence of any edit
Discussion | Some user in G is a non-conflicted PC member | Absence of any edit
Decision | Some user in G is a non-conflicted PC member | Last edited version
Decision | Some user in G is a non-conflicted PC member or a PC member (N) or the decided paper's author (N) | Absence of any edit
Reviewer assignment | Some user in G is a non-conflicted PC member (R) | Reviewers being non-conflicted PC members, and number of reviewers
Reviewer assignment | Some user in G is a non-conflicted PC member (R) or one of the reviewed paper's authors (N) | Reviewers being non-conflicted PC members
CoCon was our first case study, which motivated the initial design and formalization of the
BD security framework. Our goal to express, let alone verify, fine-grained policies concerning
the flow of information in CoCon between users and documents, could not be supported
by the existing concepts in the literature. (See [23, §4.1] for a discussion.) Examples of
properties we wanted to express are:
(1) A group of users learn nothing about a paper beyond the last uploaded version unless one of them becomes an author of that paper.
(2) A group of users learn nothing about a paper beyond the absence of any upload unless one of them becomes an author of that paper or a PC member at the paper's conference.
(3) A group of users learn nothing about the content of a review beyond the last edited version before Discussion phase and the later versions unless one of them is that review's author.
The BD security trigger and bound were born out of the need to formally capture the "unless" and "beyond" components of such properties. Tab. 1 summarizes informally the CoCon properties we have expressed in our framework as flow policies. The observation infrastructure is always the same, given by the actions and outputs of a fixed group G of users. The secrecy infrastructures are given by the various documents managed by the system (paper content, review, discussion, decision) but also, in the table's last two rows, by information about the reviewers assigned to a paper. These properties should be read as follows: A group of users learns nothing about the given secret (more precisely, about all the uploads or edits performed on a document in the indicated "secret" category) beyond the indicated bound, unless the indicated trigger becomes true. For example, the above properties (1)–(3) are the first three shown in the table, with slightly stronger triggers factoring in the conference phase as well, which we indicate succinctly via "phase stamps" – e.g., the presence of the phase stamp "D" indicates the requirement that the conference must have moved into the Discussion phase. For each type of secret, we have a range of increasingly restrictive bounds matched by increasingly weaker triggers – indeed, the more we tighten the bound (meaning we allow less information to flow), the weaker the trigger becomes (since there are more events that could break the bound). This bound–trigger dynamics exhaustively characterizes the possible flows in the system.
The notion of BD unwinding was developed and refined during the verification of CoCon’s
policies. The opportunity to take proof shortcuts (via the exit predicate) was discovered during practical "proof hacking" sessions, and led to major simplifications in the development.
during practical “proof hacking” sessions, and led to major simplifications in the development.
The different unwinding components in the Sequential Multiplex Unwinding Theorem were
naturally mapped to the different phases of a conference’s workflow.
3.2 CoSMed
CoSMed [9, 11] is a simple Facebook-style social media platform, where users can register,
create posts and establish friendship relationships. It was implemented following the same
high-level architecture as CoCon. But unlike CoCon, CoSMed is only a research prototype,
not intended for practical use.
CoSMed’s confidentiality properties raised new challenges and inspired a more expressive
way of modeling flows. In the style of CoCon, we could have specified and proved properties
such as:
A group of users learn nothing about a post unless one of them is the admin, or is the
post’s owner, or becomes friends with the owner, or the post gets marked as public.
Remember that the trigger introduced via “unless” expresses a condition in whose presence
the property stops guaranteeing anything – in other words, a trigger opens an access window
indefinitely. While true, such a property is not strong enough to be useful for CoSMed, where
both friendship and public visibility can be freely switched on and off by the owner at any
time (e.g., by “unfriending” a user, and later “friending” them again). Instead, we wanted to
prove more dynamic flow policies, reflecting any number of successive openings and closings
of the access windows during system execution.
Table 2 Confidentiality properties for the original CoSMed. The observations are made by a group of users G. The trigger is vacuously false.
Secrets | Declassification Bound
Content of a given post | Updates performed while or last before one of the following holds: some user in G is the admin, is the post owner or is friends with its owner; the post is marked as public
Friendship status between two given users U and V | Status changes performed while or last before the following holds: some user in G is the admin or is friends with U or V
Friendship requests between two given users U and V | Existence of accepted requests while or last before the following holds: some user in G is the admin or is friends with U or V

Tab. 2 summarizes informally the BD security properties that we ended up proving for CoSMed. The observation infrastructure is again given by a group G of users, and the secrecy infrastructure refers to either the content of a given post, or to information on the friendship status between two users or on the issued friendship requests. For example, the property on the first row is the dynamic-flow refinement of the coarser property discussed above:
A group of users learns nothing about a post beyond the updates performed while (or last
before) one of them is the admin, or is the post’s owner, or becomes friends with the
owner, or the post is marked as public.
Thus, the “beyond–unless” bound–trigger combination we had employed for CoCon gave way to a “beyond–while” scheme for CoSMed, where “while” refers to the allowed access windows. To achieve this formally, we made the triggers vacuously false (i.e., deactivated them completely) and incorporated the openings and closings of access windows into inductively defined bounds. This paradigm shift, discussed in detail in [9], did not require any adjustments to the framework itself.
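For intuition, here is a minimal sketch of such an inductively structured bound, written as a recursive Haskell predicate over simplified secret sequences. It is purely illustrative: the Secret type and the bound function are hypothetical names, the “or last before” refinement is omitted, and the actual Isabelle/HOL definitions are those of the CoSMed formalization [9, 11].

-- Hypothetical, simplified model of a CoSMed-style "while" bound:
-- a secret is either a content update of the post or a change of the
-- access-window status (admin/owner/friend/public, collapsed into a Bool).
data Secret
  = Upd String   -- an update to the post's content
  | Open Bool    -- True: the access window opens; False: it closes

-- Two secret sequences are related by the bound iff they have the same
-- window openings and closings and agree on the updates made while the
-- window is open; updates made while it is closed are unconstrained.
bound :: [Secret] -> [Secret] -> Bool
bound = go False   -- the access window is initially closed
  where
    go _ [] [] = True
    go _ (Open b : xs) (Open b' : ys) = b == b' && go b xs ys
    go True (Upd u : xs) (Upd u' : ys) = u == u' && go True xs ys
    go False (Upd _ : xs) ys = go False xs ys   -- hidden update, left
    go False xs (Upd _ : ys) = go False xs ys   -- hidden update, right
    go _ _ _ = False

On this reading, the observers may learn the openings and closings of the access window and the updates performed while it is open, but nothing about the updates performed while it is closed.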
3.3 CoSMeDis
CoSMeDis [8, 12] is a multi-node distributed extension of CoSMed that follows a Diaspora-
style scheme [1]: Different nodes can be deployed independently at different internet locations.
The admins of any two nodes can initiate a protocol to connect these nodes, after which
the users of one node can establish friendship relationships and share data with users of the
other. Thus, a node of CoSMeDis consists of CoSMed plus actions for connecting nodes and
cross-node post sharing and friending.
Our goal was to extend the confidentiality properties we had verified for CoSMed first to one CoSMeDis node, and then to the multi-node CoSMeDis network. This verification extension effort, described in great detail in [8], led to the discovery of the System Compositionality Theorems. The outcome was the properties shown in Tab. 3, which are natural multi-node generalizations of CoSMed's properties (from Tab. 2). They were obtained by applying the n-ary System Compositionality Theorem and then the Transport Theorem, the latter in order to switch to more readable secrets and bounds.
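For intuition, the composition underlying these theorems can be pictured as a product of labelled transition systems that synchronizes the communication steps of the nodes and interleaves their purely local steps. The following Haskell fragment is a hypothetical, simplified sketch of the binary case; the names Act, Step and compose are illustration-only and do not reflect the actual definitions used in the formal development.

-- Hypothetical, simplified sketch of composing two nodes:
-- local actions are interleaved, communication actions are synchronized.
data Act a c = Local a | Comm c

type Step s act = s -> act -> [s]   -- possible successor states

compose :: Step s1 (Act a c) -> Step s2 (Act b c)
        -> Step (s1, s2) (Act (Either a b) c)
compose step1 step2 (s1, s2) act = case act of
  Local (Left a)  -> [ (s1', s2) | s1' <- step1 s1 (Local a) ]   -- node 1 moves alone
  Local (Right b) -> [ (s1, s2') | s2' <- step2 s2 (Local b) ]   -- node 2 moves alone
  Comm c          -> [ (s1', s2') | s1' <- step1 s1 (Comm c)     -- both nodes take the
                                  , s2' <- step2 s2 (Comm c) ]   -- same communication step

The n-ary version generalizes this binary picture to a whole network of connected nodes.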
4 Related Work
We only briefly discuss the most closely related work, focusing on the general framework rather than the verification case studies. For more comprehensive literature comparisons (which also cover verification), we refer to our earlier papers [8, 9, 37].
Since we aimed for high expressiveness and precision, we defined BD security by quantifying over execution traces of general systems. This “heavy-duty” approach, sometimes called system-based security [26], can be contrasted with language-based security [42], which is concerned with coarser-grained but tractable notions that can be analyzed automatically on programming-language syntax.
BD security provides an expressive realization of Sabelfeld and Sands's dimensions of declassification [44] in a system-based setting. It descends from the tradition, inspired by epistemic logic [40], of modeling information-flow security that was pioneered by Sutherland with Nondeducibility [46] and continued with Halpern and O'Neill's Secrecy Maintenance [22] and with Askarov et al.'s Gradual Release [3–6], the latter developed in a language-based
setting. Our BD unwinding is a non-trivial generalization of unwinding proof methods going
back to Goguen and Meseguer [19] and Rushby [41], which have been extensively studied as
part of Mantel's MAKS framework [24, 26]. Unlike these predecessors, which use safety-like unwinding conditions, BD unwinding combines safety with liveness: In the BD unwinding game, the “defender”, who builds the alternative trace tr2, must
– not only be able to always stay in the game (a safety-like property),
– but also be able to eventually produce the alternative secrets sl2, provided the “attacker”, who controls the original trace tr1, has produced all the original secrets sl1 (a liveness-like property).
Because of the restrictive way of handling the liveness part of the aforementioned game, BD
unwinding is not a complete proof method, in that it cannot prove every instance of BD
security. We leave a complete extension of BD unwinding as future work.
Our system compositionality result joins a body of technically delicate work in system-
based security, where the difficult terrain was recognized early on [27]. Several frameworks
have been developed in various settings, e.g., event systems [25], reactive systems [39]
and process calculi [13, 17]. Some of these focus on formulating very restricted classes of
security properties that are always guaranteed to be preserved under a given notion of
composition, such as McCullough’s Restrictiveness [28]. Others, such as Mantel’s MAKS
framework [24, 25], formulate side conditions on the components’ security properties that
guarantee compositionality. Our result is in the latter category, and refers to a significantly
more expressive notion of information-flow security than its predecessors (which is not to say
that our result subsumes these previous results).
Temporal logics designed for information-flow security, such as SecLTL [15] and HyperCTL* [14, 16, 38], can express properties that look similar to the instances of BD security we verified for CoCon, though semantically they differ by interpreting trace quantification synchronously.
References
1 The Diaspora project. https://diasporafoundation.org/, 2016.
2 The AngularJS Web Framework, 2021. URL: https://angularjs.org/.
3 Aslan Askarov and Stephen Chong. Learning is change in knowledge: Knowledge-based security for dynamic policies. In CSF, pages 308–322, 2012.
4 Aslan Askarov and Andrew C. Myers. Attacker control and impact for confidentiality and integrity. Logical Methods in Computer Science, 7(3), 2011.
5 Aslan Askarov and Andrei Sabelfeld. Gradual release: Unifying declassification, encryption and key release policies. In IEEE Symposium on Security and Privacy, pages 207–221, 2007.
6 Aslan Askarov and Andrei Sabelfeld. Tight enforcement of information-release policies for dynamic languages. In CSF, pages 43–59, 2009.
7 Thomas Bauereiss, Armando Pesenti Gritti, Andrei Popescu, and Franco Raimondi. CoSMed: A Confidentiality-Verified Social Media Platform. In ITP, 2016.
8 Thomas Bauereiss, Armando Pesenti Gritti, Andrei Popescu, and Franco Raimondi. CoSMeDis: A distributed social media platform with formally verified confidentiality guarantees. In IEEE Symposium on Security and Privacy, pages 729–748, 2017.
9 Thomas Bauereiss, Armando Pesenti Gritti, Andrei Popescu, and Franco Raimondi. CoSMed: A Confidentiality-Verified Social Media Platform. J. Autom. Reasoning, 61(1-4):113–139, 2018.
10 Thomas Bauereiss and Andrei Popescu. Compositional BD Security. Archive of Formal Proofs, 2021. URL: https://www.isa-afp.org/entries/Compositional_BD_Security.html.
11 Thomas Bauereiss and Andrei Popescu. CoSMed: A confidentiality-verified social media platform. Archive of Formal Proofs, 2021. URL: https://www.isa-afp.org/entries/CoSMed.html.
12 Thomas Bauereiss and Andrei Popescu. CoSMeDis: A confidentiality-verified distributed social media platform. Archive of Formal Proofs, 2021. URL: https://www.isa-afp.org/entries/CoSMeDis.html.
13 Annalisa Bossi, Damiano Macedonio, Carla Piazza, and Sabina Rossi. Information flow in secure contexts. Journal of Computer Security, 13(3):391–422, 2005. URL: http://content.iospress.com/articles/journal-of-computer-security/jcs235.
14 Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, and César Sánchez. Temporal logics for hyperproperties. In POST, pages 265–284, 2014.
15 Rayna Dimitrova, Bernd Finkbeiner, Máté Kovács, Markus N. Rabe, and Helmut Seidl. Model checking information flow in reactive systems. In VMCAI, pages 169–185, 2012.
16 Bernd Finkbeiner, Markus N. Rabe, and César Sánchez. Algorithms for model checking HyperLTL and HyperCTL*. In International Conference on Computer Aided Verification, pages 30–48. Springer, 2015.
17 Riccardo Focardi and Roberto Gorrieri. Classification of security properties (Part I: Information flow). In FOSAD, pages 331–396, 2000.
18 Joseph A. Goguen and José Meseguer. Security policies and security models. In IEEE Symposium on Security and Privacy, pages 11–20, 1982.
19 Joseph A. Goguen and José Meseguer. Unwinding and inference control. In IEEE Symposium on Security and Privacy, pages 75–87, 1984.
20 Florian Haftmann. Code Generation from Specifications in Higher-Order Logic. PhD thesis, Technische Universität München, 2009.
21 Florian Haftmann and Tobias Nipkow. Code generation via higher-order rewrite systems. In FLOPS 2010, pages 103–117, 2010.
22 Joseph Y. Halpern and Kevin R. O'Neill. Secrecy in multiagent systems. ACM Trans. Inf. Syst. Secur., 12(1), 2008.
23 Sudeep Kanav, Peter Lammich, and Andrei Popescu. A conference management system with verified document confidentiality. In CAV, pages 167–183, 2014.
24 Heiko Mantel. Possibilistic definitions of security – an assembly kit. In CSFW, pages 185–199, 2000.
25 Heiko Mantel. On the composition of secure systems. In IEEE Symposium on Security and Privacy, pages 88–101, 2002.
26 Heiko Mantel. A Uniform Framework for the Formal Specification and Verification of Information Flow Security. PhD thesis, University of Saarbrücken, 2003.
27 Daryl McCullough. Specifications for multi-level security and a hook-up property. In IEEE Symposium on Security and Privacy, 1987.
28 Daryl McCullough. A hookup theorem for multilevel security. IEEE Trans. Software Eng., 16(6):563–568, 1990.
29 John McLean. Security models. In Encyclopedia of Software Engineering, 1994.
30 Toby C. Murray, Andrei Sabelfeld, and Lujo Bauer. Special issue on verified information flow security. Journal of Computer Security, 25(4-5):319–321, 2017.
31 Tobias Nipkow and Gerwin Klein. Concrete Semantics: With Isabelle/HOL. Springer, 2014.
32 Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL – A Proof Assistant for Higher-Order Logic, volume 2283 of Lecture Notes in Computer Science. Springer, 2002.
33 Andrei Popescu, Johannes Hölzl, and Tobias Nipkow. Proving concurrent noninterference. In CPP, pages 109–125, 2012.
34 Andrei Popescu, Johannes Hölzl, and Tobias Nipkow. Formalizing probabilistic noninterference. In CPP, pages 259–275. Springer, 2013.
35 Andrei Popescu and Peter Lammich. Bounded-deducibility security. Archive of Formal Proofs, 2014. URL: https://www.isa-afp.org/entries/Bounded_Deducibility_Security.html.
36 Andrei Popescu and Peter Lammich. CoCon: A confidentiality-verified conference management system. Archive of Formal Proofs, 2021. URL: https://www.isa-afp.org/entries/CoCon.html.
37 Andrei Popescu, Peter Lammich, and Ping Hou. CoCon: A conference management system with formally verified document confidentiality. J. Autom. Reason., 65(2):321–356, 2021.
38 Markus N. Rabe, Peter Lammich, and Andrei Popescu. A shallow embedding of HyperCTL. Archive of Formal Proofs, 2014.
39 Willard Rafnsson and Andrei Sabelfeld. Compositional information-flow security for interactive systems. In CSF, pages 277–292, 2014.
40 Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi. Reasoning About Knowledge. MIT Press, 2003.
41 John Rushby. Noninterference, transitivity, and channel-control security policies. Technical report, Computer Science Laboratory, SRI International, December 1992. URL: http://www.csl.sri.com/papers/csl-92-2/.
42 Andrei Sabelfeld and Andrew C. Myers. Language-based information-flow security. IEEE Journal on Selected Areas in Communications, 21(1):5–19, 2003.
43 Andrei Sabelfeld and David Sands. Probabilistic noninterference for multi-threaded programs. In CSFW, pages 200–214, 2000.
44 Andrei Sabelfeld and David Sands. Declassification: Dimensions and principles. Journal of Computer Security, 17(5):517–548, 2009.
45 The Scalatra Web Framework, 2021. URL: http://scalatra.org/.
46 D. Sutherland. A model of information. In 9th National Security Conf., pages 175–183, 1986.
47 Dennis Volpano, Geoffrey Smith, and Cynthia Irvine. A sound type system for secure flow analysis. Journal of Computer Security, 4(2,3):167–187, 1996.