A Conference Management System with
Verified Document Confidentiality
Sudeep Kanav, Peter Lammich, and Andrei Popescu
Fakultät für Informatik, Technische Universität München, Germany
Abstract. We present a case study in verified security for realistic systems: the
implementation of a conference management system, whose functional kernel
is faithfully represented in the Isabelle theorem prover, where we specify and
verify confidentiality properties. The various theoretical and practical challenges
posed by this development led to a novel security model and verification method
generally applicable to systems describable as input–output automata.
1 Introduction
Information-flow security is concerned with preventing or facilitating (un)desired flow
of information in computer systems, covering aspects such as confidentiality, integrity,
and availability of information. Dieter Gollmann wrote in 2005 [15]: “Currently, infor-
mation flow and noninterference models are areas of research rather than the bases of
a practical methodology for the design of secure systems.” The situation has improved
somewhat in the past ten years, with mature software systems such as Jif [1] offering
powerful and scalable information flow technology integrated with programming.
However, the state of the art in information-flow security models [25] is still far from
finding its way towards applications to real-world systems. If we further restrict atten-
tion to mechanically verified work, the situation is even more dramatic, with examples
of realistic system verification [3,8,29] being brave exceptions. This is partly explained
by the complexity of information-flow properties, which is much greater than that of
traditional functional properties [24]. However, this situation is certainly undesirable,
in a world where confidentiality and secrecy raise higher and higher challenges.
In this paper, we take on the task of implementing, and verifying the confidentiality
of, a realistic system: CoCon,¹ a full-fledged conference system, featuring multiple
users and conferences and offering much of the functionality of widely used systems
such as EasyChair [10] and HotCRP [11].
Conference systems are widely used in the scientific community—EasyChair alone
claims one million users. Moreover, the information flow in such systems possesses
enough complexity that errors can sneak into implementations, sometimes with
bitter-comical consequences. Recently, Popescu, as well as the authors of 267 papers
submitted to a major security conference, initially received an acceptance notification,
followed by a retraction [19]: “We are sorry to inform you that your paper was not
accepted for this year’s conference. We received 307 submissions and only accepted 40
of them ... We apologize for an earlier acceptance notification, due to a system error.”²
¹ A running version of CoCon, as well as the formal proof sources, are available at [20].
² After reading the initial acceptance notification, Popescu went out to celebrate; it was only hours later when he read the retraction.
Fig. 1: Confidentiality bug in HotCRP
The above is an information-integrity violation (a distorted decision was initially
communicated to the authors) and could have been caused by a human error rather than
a system error—but there is the question whether the system should not prevent even
such human errors. The problem with a past version of HotCRP [11] shown in Fig. 1 is
even more interesting: it describes a genuine confidentiality violation, probably stem-
ming from the logic of the system, giving the authors capabilities to read confidential
comments by the program committee (PC).
Although our methods would equally apply to integrity violations, guarding against
confidentiality violations is the focus of this verification work. We verify properties
such as the following (where DIS addresses the problem in Fig. 1):
PAP1: A group of users learn nothing about a paper unless one of them becomes an
author of that paper or a PC member at the paper’s conference
PAP2: A group of users learn nothing about a paper beyond the last submitted version
unless one of them becomes an author of that paper
REV: A group of users learn nothing about the content of a review beyond the last
submitted version before the discussion phase and the later versions unless one of
them is that review’s author
DIS: The authors learn nothing about the discussion of their paper
We will be concerned with properties restricting the information flow from the various
documents maintained by the system (papers, reviews, comments, decisions) towards
the users of the system. The restrictions refer to certain conditions (e.g., authorship, PC
membership) as well as to upper bounds (e.g., at most the last submitted version) for
information release.
We specify CoCon’s kernel using the proof assistant Isabelle [30, 31], with which
we formulate and verify confidentiality. The functional implementation of this kernel
is automatically synthesized from the specification and wrapped into a web application
offering the expected behavior of a conference system as a menu-based interface.
A first contribution of this paper is the engineering approach behind the system
specification and implementation (§2). To keep the Isabelle specification (§3) manage-
able, yet faithful to the implementation, and thereby reach a decent balance between
trust and usability, we employ state-of-the-art theorem proving and code synthesis tech-
nology towards a security-preserving layered architecture.
A second contribution is a novel security model called bounded-deducibility (BD)
security, born from confronting notions from the literature with the challenges posed by
our system (§4). The result is a reusable framework, applicable to any IO automaton.
Its main novelty is wide flexibility: it allows the precise formulation of role-based and
time-based declassification triggers and of declassification upper bounds. We endow
this framework with a declassification-oriented unwinding proof technique (§5).
Our third and last contribution is the verification itself: the BD security framework,
its general unwinding theorem, and the unwinding proofs for CoCon’s confidentiality
properties expressed as instances of BD security are all mechanized in Isabelle.
2 Overall Architecture and Security Guarantees
[Diagram omitted: Isabelle Specification —(code generation)→ Functional Program, wrapped in a Web Application]
The architecture of our system follows the paradigm of security by design:
– We formalize and verify the kernel of the system in the Isabelle proof assistant
– The formalization is automatically translated into a functional programming language
– The translated program is wrapped in a web application
Isabelle Specification We specify the system as an input–output automaton (Mealy
machine), with the inputs called “actions”. We first define, using Isabelle’s records, the
notions of state (holding information about users, conferences, and papers) and user
action (representing user requests for manipulating documents and rights in the system:
upload/download papers, edit reviews, assign reviewers, etc.). Then we define the step
function that takes a state and an action and returns a new state and an output.
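To make the kernel’s shape concrete, the following Scala sketch pictures this interface as a Mealy machine (the trait Kernel and the function run are our illustrative names, not part of the development; the actual State, Act, and Out are the generated types):

// Sketch: the verified kernel viewed as a deterministic Mealy machine.
trait Kernel[State, Act, Out] {
  def istate: State                         // initial state
  def step(s: State, a: Act): (Out, State)  // step function

  // A sequence of actions is processed by folding step over the state,
  // collecting the outputs along the way.
  def run(as: List[Act]): (List[Out], State) =
    as.foldLeft((List.empty[Out], istate)) { case ((outs, s), a) =>
      val (o, s1) = step(s, a)
      (outs :+ o, s1)
    }
}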
Scala Functional Program The specification was designed to fall within the exe-
cutable fragment of Isabelle. This allows us to automatically synthesize, using Isabelle’s
code generator [17], a program in the functional fragment of Scala [2] isomorphic to
the specification. The types of data used in the specification (numbers, strings, tuples,
records) are mapped to the corresponding Scala types. An exception is the Isabelle type
of paper contents, which is mapped to the Scala/JVM file type.
Web Application Finally, the Scala program is wrapped in a web application, offering
a menu-based user interface. Upon login, a user sees his conferences and his roles for
each of them; the menus offer role-sensitive choices, e.g., assign reviewers (for chairs)
or upload papers (for authors).
Overall Security Guarantees Our Isabelle verification targets information-flow prop-
erties. These properties express that for any possible trace of the system, there is no way
to infer from certain observations on that trace (e.g., actions performed by designated
users), certain values extracted from that trace (e.g., the paper uploads by other users).
The question arises as to what guarantees we have that the properties we verified for-
mally for the specification also hold for the overall system. E.g., if we prove in Isabelle
that users never learn the content of other users’ papers, how can we be sure that this
is actually the case when using the web interface? We do not have a formal answer to
this, but only an informal argument in terms of the trustworthiness of two trusted steps.
First, we need to trust Isabelle’s code generator. Its general-purpose design is very
flexible, supporting program and data refinement [17]. In the presence of these rich
features, the code generator is only known to preserve partial correctness, hence safety
properties [16, 17]. However, here we use the code generator in a very restrictive man-
ner, to “refine” an already deterministic specification which is an implementation in its
own right—the code generator simply translates it from the functional language of Isa-
belle to that of Scala. In addition, all the used Isabelle functions are proved to terminate,
and nontrivial data refinement is disabled. These allow us to (informally) conclude that
the synthesized implementation is trace-isomorphic to the specification, hence the for-
mer leaks as little information as the latter. (This meta-argument does not cover timing
channels, but these seem to be of little importance for leaking document content.)
Second, we need to trust that no further leakage occurs via the web application
wrapper. To acquire this trust, we make sure that the web application acts as a stateless
interface to the step function: upon a user request, all it does is invoke “step” (one or
multiple times) with input from the user and then process and display the output of
the step function. The third-party libraries used by our web application also have to be
trusted to not be vulnerable to exploits.
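The stateless-wrapper discipline can be pictured by the following Scala sketch (illustrative names; the real wrapper additionally parses requests into actions and renders the outputs as HTML):

// Sketch: the wrapper threads all state through the verified step function
// and performs no state manipulation of its own.
final class KernelWrapper[State, Act, Out](
    istate: State, step: (State, Act) => (Out, State)) {
  private var state: State = istate          // the only mutable cell

  def handle(request: Act): Out = synchronized {
    val (out, next) = step(state, request)   // invoke the verified kernel
    state = next                             // commit the successor state
    out                                      // processed and displayed to the user
  }
}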
In summary, the formal guarantees we provide in Isabelle have to be combined
with a few trusted steps to apply to the whole system. Our verification targets only the
system’s implementation logic—lower-level attacks such as browser-level forging are
out of its reach, but are orthogonal issues that could in principle be mitigated separately.
3 System Specification
The system behaves similarly to EasyChair [10], a popular conference system created
by Andrei Voronkov. It hosts multiple users and conferences, allowing the creation of
new users and conferences at any time. The system has a superuser, which we call
voronkov as a tribute to EasyChair. The voronkov is the first user of the system, and his
role is to approve new-conference requests. A conference goes through several phases.
No-Phase Any user can apply for a new conference, with the effect of registering it in
the system with “No-Phase”. After approval from the voronkov, the conference moves
to the setup phase, with the applicant becoming a conference chair.
Setup A conference chair can add new chairs and new regular PC members. From here
on, moving the conference to successor phases can be done by the chairs.
Submission A user can list the conferences awaiting submissions (i.e., being in sub-
mission phase). He can submit a paper, upload new versions, or indicate other users as
coauthors thereby granting them reading and editing rights.
Bidding Authors are no longer allowed to upload or register new papers and PC mem-
bers are allowed to view the submitted papers. PC members can place bids, indicating
for each paper one of the following preferences: “want to review”, “would review”, “no
preference”, “would not review”, and “conflict”. If the preference is “conflict”, the PC
member cannot be assigned that paper, and will not see its discussion. “Conflict” is
assigned automatically to papers authored by a PC member.
Reviewing Chairs can assign papers to PC members for reviewing either manually or
by invoking an external program to establish fair assignment based on some parameters:
preferences, number of papers per PC member, and number of reviewers per paper.
Discussion All PC members having no conflict with a paper can see its reviews and can
add comments. Also, chairs can edit the decision.
Notification The authors can read the reviews and the accept/reject decision, which no
one can edit any longer.
3.1 State, Actions, and Step Function
The state stores the lists of registered conference, user, and paper IDs and, for each
ID, actual conference, user, or paper information. Each paper ID is assigned a paper
having title, abstract, content, and, in due time, a list of reviews, a discussion text, and
a decision:

Paper = String × String × Paper_Content × List(Review) × Dis × Dec

We keep different versions of the decision and of each review, as they may transparently change during discussion: Dec = List(String) and Review = List(Review_Content), where Review_Content consists of triples (expertise, text, score).
In addition, the state stores: for each conference, the list of (IDs of) papers submitted
to that conference, the list of news updated by the chairs, and the current phase; for each
user and paper, the preferences resulted from biddings; for each user and conference, a
list of roles: chair, PC member, paper author, or paper reviewer (the last two roles also
containing paper IDs).
record State =
  confIDs : List(ConfID)    conf : ConfID → Conf    userIDs : List(UserID)
  pass : UserID → Pass    user : UserID → User    roles : ConfID → UserID → List(Role)
  paperIDs : ConfID → List(PaperID)    paper : PaperID → Paper
  pref : UserID → PaperID → Pref    news : ConfID → List(String)    phase : ConfID → Phase
Actions are parameterized by user IDs and passwords. There are 45 actions forming
five categories: creation, update, undestructive update (u-update), reading and listing.
The creation actions register new objects (users, conferences, chairs, PC members,
papers, authors), assign reviewers (by registering new review objects), and declare con-
flicts. E.g., cPaper cid uid pw pid title abs is an action by user uid with password pw
attempting to register to conference cid a new paper pid with indicated title and abstract.
The update actions modify the various documents of the system: user information
and password, paper content, reviewing preference, review content, etc. For example,
uPaperC cid uid pw pid ct is an attempt to upload a new version of paper pid by modi-
fying its content to ct. The u-update actions are similar, but also record the history of a
document’s versions. E.g., if a reviewer decides to change his review during the discus-
sion phase, then the previous version is still stored in the system and visible to the other
PC members (although never to the authors). Other documents subject to u-updates are
the news, the discussion, and the accept-reject decision.
The reading actions access the content of the system’s documents: papers, reviews,
comments, decisions, news. The listing actions produce lists of IDs satisfying various
filters—e.g., all conferences awaiting paper submissions, all PC members of a confer-
ence, all the papers submitted by a given user, etc.
Note that the first three categories of actions are aimed at modifying the state, and
the last two are aimed at observing the state through outputs. However, the modification
actions also produce a simple output, since they may succeed or fail. Moreover, the
observation actions can also be seen as changing the state to itself. Therefore we can
assume that both types produce a pair consisting of an output and a new state.
We define the function step : State → Act → Out × State that operates by determining the type of the action and dispatching specialized handler functions. The initial state of the system, istate ∈ State, is the one with a single user, the voronkov, and a dummy password (which can be changed immediately). The step function and the initial state are the only items exported by our specification to the outside world.
4 Security Model
Here we first analyze the literature for possible inspiration concerning a suitable secu-
rity model for our system. Then we introduce our own notion, which is an extension of
Sutherland’s nondeducibility [39] that factors in declassification triggers and bounds.
4.1 Relevant Literature
There is a vast amount of literature on information-flow security, with many variants of
formalisms and verification techniques. An important distinction is between notions that
completely forbid information flow (between designated sources and sinks) and notions
that only restrict the flow, allowing some declassification. Historically, the former were
introduced first, and the latter were subsequently introduced as generalizations.
Absence of Information Flow The information-flow security literature starts in the
late 1970s and early 1980s [7, 13, 33], motivated by the desire to express the absence
of information leaks of systems more abstractly and more precisely than by means of
access control [4, 22]. Very influential were Goguen and Meseguer’s notion of nonin-
terference [13] and its associated proof by unwinding [14]. Unwinding is essentially a
form of simulation that allows one to construct incrementally, from a perturbed trace of
the system, an alternative “corrected” trace that “closes the leak”. Many other notions
were introduced subsequently, either in specialized programming-language-based [37]
or process-algebra-based [12,36] settings or in purely semantic, event-system-based set-
tings [26,27, 32, 39]. (Here we are mostly interested in the last category.) These notions
are aimed at extending noninterference to nondeterministic systems, closing Trojan-
horse channels, or achieving compositionality. The unwinding technique has been gen-
eralized for some of these variants—McLean [28] and Mantel [24] give overviews.
Even ignoring our aimed declassification aspect, most of these notions do not ade-
quately model our properties of interest exemplified in the introduction. One problem is
that they are not flexible enough w.r.t. the observations. They state nondetectability of
absence or occurrence of certain events anywhere in a system trace. By contrast, we are
interested in a very controlled positioning of such undetectable events: in the property
PAP2 from the introduction, the unauthorized user should not learn of preliminary (non-
final) uploads of a paper. Moreover, we are not interested in whole events, but rather in
certain relevant values extracted from the events: e.g., the content of the paper, and not
the ID of one of the particular authors who uploads it.
A fortunate exception to the above flexibility problems is Sutherland’s early notion of nondeducibility [39]. One considers a set of worlds World and two functions F : World → J and H : World → K. For example, the worlds could be the valid traces of the system, F could select the actions of certain users (potential attackers), and H could select the actions of other users (intended as being secret). Nondeducibility of H from F says that the following holds for all w ∈ World: for all k in the image of H, there exists w1 ∈ World such that F w1 = F w and H w1 = k. Intuitively, from what the attacker (modeled as F) knows about the actual world w, the secret actions (the value of H) could be anything (in the image of H)—hence cannot be “deduced”. The generality of this framework allows one to fine-tune both the location of the relevant events in the trace and their values of interest. But generality is no free lunch: it is no longer clear how to provide an unwinding-like incremental proof method.
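Spelled out as a displayed formula, nondeducibility of H from F thus reads:

∀w ∈ World. ∀k ∈ image H. ∃w1 ∈ World. F w1 = F w ∧ H w1 = k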
Halpern and O’Neill [18] recast nondeducibility as a property called secrecy main-
tenance, in a multi-agent framework of “runs-and-systems” [34] based on epistemic
logic. Their formulation enables general-purpose epistemic logic primitives for deduc-
ing absence of leaks, but no unwinding or any other inductive reasoning technique.
On the practical verification side, Arapinis et al. [3] introduce ConfiChair, a con-
ference system that improves on standard systems such as EasyChair by guaranteeing
that “the cloud”, consisting of the system provider/administrator, cannot learn the con-
tent of the papers and reviews and cannot link users with their written reviews. This is
achieved by a cryptographic protocol based on key translations and mixes. They encode
the desired properties as strong secrecy (a property similar to Goguen-Meseguer nonin-
terference) and verify them using the ProVerif [5] tool specialized in security protocols.
Our work differs from theirs in three major aspects. First, they propose a cryptography-
based enhancement, while we focus on a traditional conference system not involving
cryptography. Second, they manage to encode and verify the desired properties auto-
matically, while we use interactive theorem proving. While their automatic verification
is an impressive achievement, we cannot hope for the same with our targeted properties
which, while having a similar nature, are more nuanced and complex. E.g., proper-
ties like PAP2 and REV, with such flexible indications of declassification bounds, go
far beyond strong secrecy and require interactive verification. Finally, we synthesize
functional code isomorphic to the specification, whereas they provide a separate imple-
mentation, not linked to the specification which abstracts away from many functionality
aspects.
Restriction of Information Flow A large body of work on declassification was pur-
sued in a language-based setting. Sabelfeld and Sands [38] give an overview of the state
of the art up to 2009. Although they target language-based declassification, they phrase some generic dimensions of declassification, most of which apply to our case:
– What information is released? Here, document content, e.g., of papers, reviews, etc.
– Where in the system is information released? In our case, the relevant “where” is a “from where” (referring to the source, not to the exit point): from selected places in the system trace, e.g., the last submitted version before the deadline.
– When can information be released? After a certain trigger occurs, e.g., authorship.
Sabelfeld and Sands consider another interesting instance of the “where” dimension,
namely intransitive noninterference [23, 35]. This is an extension of noninterference
that allows downgrading of information, say, from High to Low, via a controlled De-
classifier level. It could be possible to encode aspects of our properties of interest as
intransitive noninterference—e.g., we could encode the act of a user becoming an au-
thor as a declassifying action for the target paper. However, such an encoding would be
rather technical and somewhat artificial for our system; additionally, it is not clear how
to factor in our aforementioned specific “where” dimension.
Recently, the “when” aspect of declassification has been included as first-class cit-
izen in customized temporal logics [6, 9], which can express aspects of our desired
properties, e.g., “unless/until he becomes an author”. Their work is focused on effi-
ciently model-checking finite systems, whereas we are interested in verifying an infinite
system. Combining model checking with infinite-to-finite abstraction is an interesting
prospect, but reflecting information-flow security properties under abstraction is a difficult problem.
4.2 Bounded-Deducibility Security
We introduce a novel notion of information-flow security that:
– retains the precision and versatility of nondeducibility
– factors in declassification as required by our motivating examples
– is amenable to a general unwinding technique
We shall formulate security in general, not only for our concrete system from §3.1, but for any IO automaton indicated by the following data. We fix sets of states, State, actions, Act, and outputs, Out, an initial state istate ∈ State, and a step function step : State → Act → Out × State. We let Trans, the set of transitions, be State × Act × Out × State. Thus, a transition trn is a tuple, written (s, a, o, s′); s indicates the source, a the action, o the output, and s′ the target. trn is called valid if it is induced by the step function, namely step s a = (o, s′).
A trace tr ∈ Trace is any list of transitions: Trace = List(Trans). For any s ∈ State, the set of valid traces starting in s, Valid_s ⊆ Trace, consists of the traces of the form [(s1, a1, o1, s2), (s2, a2, o2, s3), . . . , (sn−1, an−1, on−1, sn)] for some n, where s1 = s and each transition (si, ai, oi, si+1) is valid. We will be interested in the valid traces starting in the initial state istate—we simply call these valid traces and write Valid for Valid_istate.
Besides the IO automaton, we assume that we are given the following data:
– a value domain Val, together with a value filter ϕ : Trans → Bool and a value producer f : Trans → Val
– an observation domain Obs, together with an observation filter γ : Trans → Bool and an observation producer g : Trans → Obs
We define the value function V : Trace → List(Val) componentwise, filtering out the transitions not satisfying ϕ and applying f:

V [] ≡ []        V ([trn] · tr) ≡ if ϕ trn then (f trn) · (V tr) else V tr

We also define the observation function O : Trace → List(Obs) just like V, but using γ as a filter and g as a producer.
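Executably, V and O are the same filter-then-map pass over a trace; the following Scala sketch captures this common scheme (names ours):

// One-pass projection of a trace: keep the transitions selected by the
// filter (ϕ or γ) and map them through the producer (f or g).
def project[Trans, A](tr: List[Trans])(
    filter: Trans => Boolean, produce: Trans => A): List[A] =
  tr.collect { case trn if filter(trn) => produce(trn) }

// V tr = project(tr)(ϕ, f)   and   O tr = project(tr)(γ, g)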
We think of the above as an instantiation of the abstract framework for nondeducibility recalled in §4.1, where World = Valid, F = O, and H = V. Thus, nondeducibility states that the observer O may learn nothing about V. Here we are concerned with a more fine-grained analysis, asking ourselves what may the observer O learn about V.
Using the idea underlying nondeducibility, we can answer this precisely: Given a trace tr ∈ Valid, the observer sees O tr and therefore can infer that V tr belongs to the set of all values V tr1 for some tr1 ∈ Valid such that O tr1 = O tr. In other words, he can infer that the value is in the set V (O⁻¹(O tr) ∩ Valid), and nothing beyond this. We call this set the declassification associated to tr, written Dec tr.
or in set-theoretic terms, lower bounds for Dectr. To this end, we further fix:
a declassification bound B:List(Val)List(Val)Bool
a declassification trigger T:Trans Bool
The system is called bounded-deducibility-secure (BD-secure) if for all tr Trace such
that never T tr, it holds that {vl1|B(Vtr)vl1} ⊆ Dectr (where “never T tr” means “T
holds for no transition in tr”). Informally, BD security expresses the following:
If the trigger T never holds (i.e., unless T eventually holds, i.e., until T holds),
the observer O can learn nothing about the values V beyond B
We can think of B positively, as an upper bound for declassification, or negatively, as a lower bound for uncertainty. On the other hand, T is a trigger removing the bound B: as soon as T becomes true, the containment of declassification is no longer guaranteed. In the extreme case of B being everywhere true and T everywhere false, we have no declassification, i.e., total uncertainty—in other words, standard nondeducibility.
Unfolding some definitions, we can alternatively express BD security as the following being true for all tr ∈ Valid and vl, vl1 ∈ List(Val):

never T tr ∧ V tr = vl ∧ B vl vl1 ⟹ (∃tr1 ∈ Valid. O tr1 = O tr ∧ V tr1 = vl1)    (∗)
4.3 Discussion
BD security is a natural extension of nondeducibility. If one considers the latter as
reasonably expressing the absence of information leak, then one is likely to accept the
former as a reasonable means to indicate bounds on the leak. Unlike previous notions
in the literature, BD security allows one to express the bounds as precisely as desired.
As an extension of nondeducibility, BD security is subject to the same criticism.
The problem with nondeducibility [26, 28, 36] is that in some cases it is too weak, since it takes as plausible each possible explanation for an observation: if the observation sequence is ol, then any trace tr such that O tr = ol is plausible. But what if the low-level observers can synchronize their actions and observations with the actions of other entities, such as a high-level user or a Trojan horse acting on his behalf, or even a third-party entity that is neither high nor low? Even without synchronization, the low-level observer may learn, from outside the system, of certain behavior patterns of the high-level users. Then the set of plausible explanations can be reduced, leading to information leak.
In our case, the low-level observers are a group of users assumed to never acquire
a certain status (e.g., authorship of a paper). The other users of the system are either
“high-level” (e.g., the authors of the paper) or “third-party” (e.g., the non-author users
not in the group of observers). Concerning the high-level users, it does not make sense
to assume that they would cooperate to leak information through the system, since they
certainly have better means to do that outside the system, e.g., via email. Users also
do not have to trust external software, since everything is filtered through the system
kernel—e.g., a chair can run an external linear-programming tool for assigning review-
ers, but each assignment is still done through the verified step function. As for the
possible third-party cooperation towards leaks of information, this is bypassed by our
consideration of arbitrary groups of observers: in the worst case, all the unauthorized
users can be placed in this group. However, the possibility to learn and exploit behavior
patterns from outside the system is not explicitly addressed by BD security—it would
be best dealt with by a probabilistic analysis.
4.4 Instantiation to Our Running Examples
Recall that BD security involves the following parameters:
– an IO automaton (State, Act, Out, istate, step)
– infrastructures for values (Val, ϕ, f) and observations (Obs, γ, g)
– a declassification specification: trigger T and bound B
In particular, this applies to our conference system automaton. BD security then captures our examples by suitably instantiating the observation and declassification parameters. For all our examples, we have the same observation infrastructure. We fix UIDs, the set of IDs of the observing users. We let Obs = Act × Out. Given a transition, γ holds iff the action’s subject is a user in UIDs, and g returns the pair (action, output). O tr thus purges tr, keeping only actions of users in UIDs.
The value infrastructure depends on the considered type of document. For PAP1 and PAP2 we fix PID, the ID of the paper of interest. We let Val = List(Paper_Content). Given a transition, ϕ holds iff the action is an upload of paper PID, and f returns the uploaded content. V tr thus returns the list of all uploaded paper contents for PID.
The declassification triggers and bounds are specific to each example. For PAP1, we define T (s, a, o, s′) as “in state s′, some user in UIDs is an author of PID or a PC member of some conference cid where PID is registered,” formally:

∃uid ∈ UIDs. isAut s′ uid PID ∨ (∃cid. PID ∈ paperIDs s′ cid ∧ isPC s′ uid cid)
Intuitively, the intent with PAP1 is that, provided T never holds, users in UIDs learn nothing about the various consecutive versions of PID. But is it true that they can learn absolutely nothing? There is a remote possibility that a user could infer that no version was submitted (by probing the current phases of the conferences in the system and noticing that none has reached the submission phase). But indeed, nothing beyond this quite harmless information should leak: any nonempty value sequence vl might as well have been any other (possibly empty!) sequence vl1. Hence we define B vl vl1 as vl ≠ [].
For PAP2, the trigger involves only authorship, ignoring PC membership at the paper’s conference—we take T (s, a, o, s′) to be ∃uid ∈ UIDs. isAut s′ uid PID. Here we have a genuine example of a nontrivial declassification bound—since a PC member can learn the paper’s content, but only as its last submitted version, we define B vl vl1 as vl ≠ [] ≠ vl1 ∧ last vl = last vl1, where the function last returns the last element of a list.
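For concreteness, the two bounds can be written as executable predicates on value sequences, as in the following Scala sketch (Paper_Content left abstract as a type parameter):

// PAP1: any nonempty real sequence vl is compatible with any alternative vl1.
def bPAP1[C](vl: List[C], vl1: List[C]): Boolean =
  vl.nonEmpty

// PAP2: only the last uploaded version may be learned, so the alternatives
// are exactly the nonempty sequences agreeing with vl on the last element.
def bPAP2[C](vl: List[C], vl1: List[C]): Boolean =
  vl.nonEmpty && vl1.nonEmpty && vl.last == vl1.last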
For REV, the value infrastructure refers not only to the review’s content but also to the conference phase: Val = List(Phase × Review_Content). The functions ϕ and f are defined similarly to those for paper contents, mutatis mutandis; in particular, f returns a pair (ph, rct) consisting of the conference’s current phase and the updated review’s content; hence V returns a list of such pairs. The trigger T is similar to that of PAP2 but refers to review authorship rather than paper authorship. The bound B is more complex. Any user can infer that the only possibilities for the phase are Reviewing and Discussion, in this order—i.e., that vl has the form ul · wl such that the pairs in ul have Reviewing as first component and the pairs in wl have Discussion. Moreover, any PC member having no conflict with PID can learn last ul (the last submitted version before Discussion), and wl (the versions updated during Discussion, public to non-conflict PC members); but (until T holds) nothing beyond these. So B vl vl1 states that vl decomposes as ul · wl as indicated above, vl1 decomposes similarly as ul1 · wl, and last ul = last ul1.
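A Scala sketch of this bound, glossing over the empty-sequence corner cases that the formal definition treats explicitly (Review_Content left abstract; Phase reduced to the two relevant values):

sealed trait Phase
case object Reviewing extends Phase
case object Discussion extends Phase

// vl must decompose as ul ++ wl with ul in Reviewing and wl in Discussion;
// vl1 must decompose with the same wl and agree on the last pre-Discussion version.
def bREV[R](vl: List[(Phase, R)], vl1: List[(Phase, R)]): Boolean = {
  val (ul, wl)   = vl.span(_._1 == Reviewing)   // split at the phase change
  val (ul1, wl1) = vl1.span(_._1 == Reviewing)
  wl.forall(_._1 == Discussion) && wl1.forall(_._1 == Discussion) &&
  wl == wl1 &&                                  // Discussion versions are public
  ul.nonEmpty && ul1.nonEmpty && ul.last == ul1.last
}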
DIS needs rephrasing to be captured as BD security. It can be decomposed into:
DIS1: An author always has conflict with his papers
DIS2: A group of users learn nothing about a paper’s discussion unless one of them becomes a PC member at the paper’s conference having no conflict with the paper
DIS1 is a safety property. DIS2 is an instance of BD security defined as expected.
| #  | Source              | Declassification Trigger                                                                    | Declassification Bound                                            |
|----|---------------------|---------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
| 1  | Paper Content       | Paper Authorship                                                                              | Last Uploaded Version                                              |
| 2  | Paper Content       | Paper Authorship or PC Membership^B                                                           | Absence of Any Upload                                              |
| 3  | Review              | Review Authorship                                                                             | Last Edited Version Before Discussion and All the Later Versions   |
| 4  | Review              | Review Authorship or Non-Conflict PC Membership^D                                             | Last Edited Version Before Notification                            |
| 5  | Review              | Review Authorship or Non-Conflict PC Membership^D or PC Membership^N or Paper Authorship^N    | Absence of Any Edit                                                |
| 6  | Discussion          | Non-Conflict PC Membership                                                                    | Absence of Any Edit                                                |
| 7  | Decision            | Non-Conflict PC Membership                                                                    | Last Edited Version                                                |
| 8  | Decision            | Non-Conflict PC Membership or PC Membership^N or Paper Authorship^N                           | Absence of Any Edit                                                |
| 9  | Reviewer Assignment | Non-Conflict PC Membership^R                                                                  | Non-Conflict PC Membership of Reviewers and No. of Reviews         |
| 10 | Reviewer Assignment | Non-Conflict PC Membership^R or Paper Authorship^N                                            | Non-Conflict PC Membership of Reviewers                            |

Phase stamps: B = Bidding, D = Discussion, N = Notification, R = Reviewing
4.5 More Instances
The above table shows an array of confidentiality properties formulated as BD security.
They provide a classification of relevant roles, statuses and conference phases that are
necessary conditions for degrees of information release. The observation infrastructure
is always the same, given by the actions and outputs of a fixed group of users as in §4.4.
The table lists several information sources, each yielding a different value infras-
tructure. In rows 1–8, the sources are actual documents: paper content, review, discus-
sion, decision. The properties PAP1, PAP2, REV and DIS2 form rows 2, 1, 3, and 6.
In rows 9 and 10, the sources are the identities of the reviewers assigned to the paper.
The declassification triggers express paper or review authorship (being or becom-
ing an author of the indicated document) or PC membership at the paper’s conference,
with or without the requirement of lack of conflict with the paper. Some triggers are
also listed with “phase stamps” that strengthen the statements. E.g., row 2 contains a strengthening of the trigger discussed so far for PAP1: “PC Membership^B” should be read as “PC membership and paper’s conference phase being at least Bidding.” Some of the triggers require lack of conflict with the paper, which is often important for the security statement to be strong enough. This is the case for DIS2 (row 6), since without the non-conflict assumption DIS2 and DIS1 would no longer imply DIS. By contrast, lack of conflict cannot be added to PC membership in PAP1 (row 2), since such a stronger
version would not hold: even if a PC member decides to indicate conflict with a paper,
this happens after he had the opportunity to see the paper’s content.
Most of the declassification bounds are similar to those from §4.4. The row 10 prop-
erty states that, unless one becomes a PC member having no conflict with a paper in the
reviewing phase or a paper’s author in the notification phase, one can’t learn anything
about the paper’s assigned reviewers beyond what everyone knows: that reviewers are
non-conflict PC members. If we remove the non-authorship restriction, then the user
may also infer the number of reviewers—but, as row 9 states, nothing beyond this.
5 Verification
To cope with general declassification bounds, BD security speaks about system traces
in conjunction with value sequences that must be produced by these traces. We extend
the unwinding proof technique to this situation and apply the result to the verification
of our confidentiality properties.
5.1 Unwinding Proof Method
We see from definition (∗) that to prove BD security, one starts with a valid trace tr (starting in s and having value sequence vl) and an “alternative” value sequence vl1 such that B vl vl1, and one needs to produce an “alternative” trace tr1 starting in s whose value sequence is vl1 and whose observation sequence is the same as that of tr.
In the tradition of unwinding for noninterference [14, 35], we wish to construct tr1 from tr incrementally: as tr grows, tr1 should grow nearly synchronously. In order for tr1 to have the same observation sequence (produced by O) as tr, we need to require that the observable transitions of tr1 (i.e., those for which γ holds) be identical to those of tr.
As for the value sequences (produced by V), we face the following problem. In contrast to the unwinding relations studied so far in the literature, we must consider an additional parameter, namely the a priori given value sequence vl1 that needs to be produced by tr1. In fact, it appears that one would need to maintain, besides an unwinding relation on states θ : State → State → Bool, also an “evolving” generalization of the declassification bound B; then θ and B would certainly need to be synchronized.
We resolve this by enlarging the domain of the unwindings to quaternary relations ∆ : State → List(Val) → State → List(Val) → Bool that generalize both θ and B. Intuitively, ∆ s vl s1 vl1 keeps track of the current state of tr, the remaining value sequence of tr, the current state of tr1, and the remaining value sequence of tr1.
Let the predicate consume trn vl vl′ mean that the transition trn either produces a value that is consumed from vl, yielding vl′, or produces no value and vl = vl′. Formally, consume trn vl vl′ is defined as:

if ϕ trn then (vl ≠ [] ∧ f trn = head vl ∧ vl′ = tail vl) else (vl′ = vl)
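Operationally, consume can be read as a partial function returning the remaining value list; a Scala sketch (names ours):

// Some(vl') iff consume trn vl vl' holds; None when the transition produces
// a value that does not match the head of vl (or vl is empty).
def consume[Trans, Val](phi: Trans => Boolean, f: Trans => Val)(
    trn: Trans, vl: List[Val]): Option[List[Val]] =
  if (!phi(trn)) Some(vl)                       // no value produced: vl unchanged
  else vl match {
    case v :: rest if f(trn) == v => Some(rest) // value produced and consumed
    case _                        => None       // empty list or head mismatch
  }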
In light of the above discussion, we are tempted to define an unwinding as a relation ∆ such that ∆ s vl s1 vl1 implies either of the following conditions:

REACTION: For any valid transition (s, a, o, s′) and lists of values vl, vl′ such that consume (s, a, o, s′) vl vl′ holds, either of the following holds:
– IGNORE: The transition yields no observation (¬ γ (s, a, o, s′)) and ∆ s′ vl′ s1 vl1 holds
– MATCH: There exist a valid transition (s1, a1, o1, s′1) and a list of values vl′1 such that consume (s1, a1, o1, s′1) vl1 vl′1 and ∆ s′ vl′ s′1 vl′1 hold

INDEPENDENT ACTION: There exists a valid transition (s1, a1, o1, s′1) that yields no observation (¬ γ (s1, a1, o1, s′1)) and a list of values vl′1 such that consume (s1, a1, o1, s′1) vl1 vl′1 and ∆ s vl s′1 vl′1 hold
The intent is that BD security should hold if there exists an unwinding ∆ that “initially includes” B. A trace tr1 could then be constructed incrementally from tr, vl and vl1, applying REACTION or INDEPENDENT ACTION until the three lists become empty.
Progress However, such an argument faces difficulties. First, INDEPENDENT ACTION is not guaranteed to decrease any of the lists. To address this, we strengthen INDEPENDENT ACTION by adding the requirement that ϕ (s1, a1, o1, s′1) holds—this ensures that vl1 decreases (i.e., vl′1 is strictly shorter than vl1). This way, we know that each REACTION and INDEPENDENT ACTION decreases at least one list: the former tr and the latter vl1; and since vl is empty whenever tr is, the progress problem seems resolved.
Yet, there is a second, more subtle difficulty: after tr has become empty, how can we know that vl1 will start decreasing? With the restrictions so far, one may still choose REACTION with parameters that leave vl1 unaffected. So we need to make sure that the following implication holds: if tr = [] and vl1 ≠ [], then vl1 will be consumed. Since from inside the unwinding relation we cannot (and do not want to!) see tr, but only vl, we weaken the assumption of this implication to “if vl = [] and vl1 ≠ [];” moreover, we strengthen its conclusion to requiring that only the INDEPENDENT ACTION choice (guaranteed to shorten vl1) be available. Equivalently, we condition the alternative choice of REACTION by the negation of the above, namely vl ≠ [] ∨ vl1 = [].
Exit Condition The third observation is not concerned with a difficulty, but with an optimization. We note that BD security holds trivially if the original trace tr cannot saturate the value list vl, i.e., if V tr ≠ vl—this happens if and only if, at some point, an element v of vl can no longer be saturated, i.e., for some decompositions tr = tr′ · tr″ and vl = vl′ · [v] · vl″ of tr and vl, it holds that V tr′ = vl′ and ∀trn ∈ tr″. ϕ trn ⟹ f trn ≠ v.
Can we detect such a situation from within ∆? The answer is (an over-approximated) yes: after ∆ s vl s1 vl1 evolves by REACTION and INDEPENDENT ACTION to ∆ s′ ([v] · vl″) s′1 vl′1 for some s′, s′1 and vl′1 (presumably consuming tr′ and saturating the vl′ prefix of vl), then one can safely exit the game if one proves that no valid trace tr″ starting from s′ can ever saturate v, in that it satisfies ∀trn ∈ tr″. ϕ trn ⟹ f trn ≠ v.
The final definition of BD unwinding is given below, where reach : State → Bool is the state reachability predicate and reach¬T : State → Bool is its strengthening to reachability by transitions that do not satisfy T:

unwind ∆ ≡ ∀s vl s1 vl1. reach¬T s ∧ reach s1 ∧ ∆ s vl s1 vl1 ⟹
    ((vl ≠ [] ∨ vl1 = []) ∧ reaction ∆ s vl s1 vl1)
  ∨ iaction ∆ s vl s1 vl1
  ∨ (vl ≠ [] ∧ exit s (head vl))
The predicates iaction and reaction formalize INDEPENDENT ACTION (with its aforementioned strengthening) and REACTION, the latter being a disjunction of predicates formalizing IGNORE and MATCH. The predicate exit s v is defined as ∀tr trn. (tr · [trn]) ∈ Valid_s ∧ ϕ trn ⟹ f trn ≠ v. It expresses a safety property, and therefore can be verified in a trace-free manner. We can now prove that indeed any unwinding relation ∆ constructs an “alternative” trace tr1 from any trace tr starting in a ¬T-reachable state:

Lemma. unwind ∆ ∧ reach¬T s ∧ reach s1 ∧ ∆ s vl s1 vl1 ∧ tr ∈ Valid_s ∧ never T tr ∧ V tr = vl ⟹ (∃tr1. tr1 ∈ Valid_s1 ∧ O tr1 = O tr ∧ V tr1 = vl1)
Unwinding Theorem. If unwind ∆ and ∀vl vl1. B vl vl1 ⟹ ∆ istate vl istate vl1, then the system is BD-secure.

Proof ideas. The lemma follows by induction on length tr + length vl1 (as discussed above about progress). The theorem follows from the lemma, taking s1 = s = istate.
According to the theorem, BD unwinding is a sound proof method for BD security: to check BD security it suffices to define a relation ∆ and prove that it includes B on the initial state and that it is a BD unwinding.
[Diagram omitted]
Fig. 3: A network of unwinding components

[Diagram omitted]
Fig. 4: A linear network with exit
5.2 Compositional Reasoning
To keep each reasoning step manageable, it is convenient to allow decomposing the single unwinding relation ∆ into relations ∆1, . . . , ∆n. Unlike ∆, a component ∆i may unwind not only to itself but to any combination of the ∆j’s. Technically, we define the predicate unwind_to just like unwind but taking two arguments instead of one: a first relation and a second relation to which the first one unwinds. We replace the single requirement unwind ∆ with a set of requirements unwind_to ∆i (disj (next ∆i)), where next ∆i is a chosen subset of {∆1, . . . , ∆n} and disj takes the disjunction of a set of predicates. This enables a form of sound compositional reasoning: if we verify a condition as above for each component ∆i, we obtain an overall unwinding relation disj {∆1, . . . , ∆n}.
The network of components can form any directed graph—Fig. 3 shows an example. However, our unwinding proofs will be phase-directed, and hence the following linear network will suffice (Fig. 4): each ∆i unwinds either to itself, or to ∆i+1 (if i ≠ n), or to an exit component ∆e that invariably chooses the “exit” unwinding condition. For the first component, ∆1, we need to verify that it extends B on the initial state.
5.3 Verification of Concrete Instances
We have verified all the BD security instances listed in §4.5. For each of them we defined a suitable chain of unwinding components ∆i as in Fig. 4.
Recall from the definition of BD security that one needs to construct an alternative trace tr1 (which produces the value sequence vl1) from the original trace tr (which produces the value sequence vl). A chain of ∆i’s witnesses the strategy for such a construction, although it does not record the whole traces tr1 and tr but only the states they have reached so far, s and s1. The separation between the ∆i’s is guided by milestones in the journey of tr and tr1, such as: a paper’s registration to a conference, conference phases, the registration of a relevant agent like a chair, a non-conflicted PC member, or a reviewer. E.g., Fig. 5 shows the unwinding components in the proof of PAP2, where B vl vl1 is the declassification bound (vl ≠ [] ≠ vl1 ∧ last vl = last vl1) and the changes from ∆i to ∆i+1 are emphasized.
Each property has one or more critical phases, the only phases when vl and vl1 can be produced. E.g., for PAP2, paper uploading is only available in Submission (while for REV, there is an update action in Reviewing, and an u-update one in Discussion). Until those phases, tr1 proceeds synchronously with tr, taking the same actions—consequently, the states s and s1 are equal in ∆1. In the critical phases, the traces tr and tr1 will diverge, due to the need of producing different (but B-related) value sequences. As a result, the equality between s and s1 is replaced with the weaker relation of equality everywhere
except on certain components of the state, e.g., the content of a given paper (written =PID for PAP2), or of a given review, or of the previous versions of a given review, etc.

∆1 s vl s1 vl1 ≡ ¬(∃cid. PID ∈ paperIDs s cid) ∧ s = s1 ∧ B vl vl1
∆2 s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid ∧ phase s cid = Submission) ∧ s =PID s1 ∧ B vl vl1
∆3 s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid) ∧ s = s1 ∧ vl = vl1 = []
∆e s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission) ∧ vl ≠ []

Fig. 5: The unwinding components for the proof of PAP2
At the end of the critical phases, tr1 will usually need to resynchronize with tr and hereafter proceed with identical actions. Consequently, s and s1 will become connected by a stronger “equality everywhere except” relation, or even plain equality again. The smooth transition between consecutive components ∆i and ∆i+1 that impose different state equalities is ensured by a suitable INDEPENDENT-ACTION/REACTION strategy.
For PAP2, such a strategy for transitioning from ∆2 to ∆3 (with emptying vl and vl1 at the same time) is the following: by INDEPENDENT ACTION, tr1 will produce all values in vl1 save for the last one, which will be produced by REACTION in sync with tr when tr reaches the last value in vl; this is possible since B guarantees last vl = last vl1. The exit component ∆e witnesses situations (s, vl) not producible from any system trace tr, in order to exclude them via Exit. For PAP2, such a situation is the paper’s conference phase exceeding Submission with values vl still to be produced. ∆e is reached from ∆2 when a change-phase action occurs.
Several safety properties are needed in the unwinding proofs. For PAP2, we use that there is at most one conference to which a paper can be registered—this ensures that no value can be produced (i.e., ϕ (head vl) does not hold) from within ∆1 or ∆2, since no paper upload is possible without prior registration.
The verification took us two person-months, during which we also developed reusable
proof infrastructure and automation. Eventually, we could prove the auxiliary safety
properties automatically. The unwinding proofs still required some interaction for in-
dicating the INDEPENDENT-ACTION/REACT ION strategy—we are currently exploring
the prospect of fully automating the strategy part too, based on a suitable security-
preserving abstraction in conjunction with an external model checker.
Conclusion Most of the information-flow security models proposed by theoreticians
have not been confronted with the complexity of a realistic application, and therefore
fail to address, or abstract away from, important aspects of the conditions for infor-
mation release or restraint. In our verification case study, we approached the problem
bottom-up: we faithfully formalized a realistic system, on which we identified, for-
mulated and verified confidentiality properties. This experience led to the design of a
flexible verification infrastructure for restricted information flow in IO automata.
Acknowledgement. Tobias Nipkow encouraged us to pursue this work. Several people made
helpful comments and/or indicated related work: the CAV reviewers, Jasmin Blanchette, Manuel
Eberl, Lars Hupel, Fabian Immler, Steffen Lortz, Giuliano Losa, Tobias Nipkow, Benedikt Nord-
hoff, Martin Ochoa, Markus Rabe, and Dmitriy Traytel. The research was supported by the DFG
project Security Type Systems and Deduction (grant Ni491/13-2), part of Reliably Secure Soft-
ware Systems (RS3). The authors are listed in alphabetical order.
References
1. Jif: Java + information flow, 2014. http://www.cs.cornell.edu/jif.
2. The Scala Programming Language, 2014. http://www.scala-lang.org.
3. M. Arapinis, S. Bursuc, and M. Ryan. Privacy supporting cloud computing: ConfiChair, a case study. In POST, pp. 89–108, 2012.
4. D. E. Bell and L. J. La Padula. Secure computer system: Unified exposition and Multics interpretation. Technical Report MTR-2997, MITRE, Bedford, MA, 1976.
5. B. Blanchet, M. Abadi, and C. Fournet. Automated verification of selected equivalences for
security protocols. In LICS, pp. 331–340, 2005.
6. M. R. Clarkson, B. Finkbeiner, M. Koleini, K. K. Micinski, M. N. Rabe, and C. Sánchez.
Temporal logics for hyperproperties. In POST, pp. 265–284, 2014.
7. E. S. Cohen. Information transmission in computational systems. In SOSP, pp. 133–139,
1977.
8. A. A. de Amorim, N. Collins, A. DeHon, D. Demange, C. Hritcu, D. Pichardie, B. C.
Pierce, R. Pollack, and A. Tolmach. A verified information-flow architecture. In POPL,
pp. 165–178, 2014.
9. R. Dimitrova, B. Finkbeiner, M. Kovács, M. N. Rabe, and H. Seidl. Model checking
information flow in reactive systems. In VMCAI, pp. 169–185, 2012.
10. The EasyChair conference system, 2014. http://easychair.org.
11. The HotCRP conference management system, 2014.
http://read.seas.harvard.edu/~kohler/hotcrp.
12. R. Focardi and R. Gorrieri. Classification of security properties (part i: Information flow).
In FOSAD, pp. 331–396, 2000.
13. J. A. Goguen and J. Meseguer. Security policies and security models. In IEEE Symposium
on Security and Privacy, pp. 11–20, 1982.
14. J. A. Goguen and J. Meseguer. Unwinding and inference control. In IEEE Symposium on
Security and Privacy, pp. 75–87, 1984.
15. D. Gollmann. Computer Security. Wiley, 2nd ed., 2005.
16. F. Haftmann. Code Generation from Specifications in Higher-Order Logic. Ph.D. thesis,
Technische Universität München, 2009.
17. F. Haftmann and T. Nipkow. Code generation via higher-order rewrite systems. In FLOPS
2010, pp. 103–117, 2010.
18. J. Y. Halpern and K. R. O’Neill. Secrecy in multiagent systems. ACM Trans. Inf. Syst.
Secur., 12(1), 2008.
19. IEEE Symposium on Security and Privacy. Email notification, 2012.
20. S. Kanav, P. Lammich, and A. Popescu. The CoCon website.
http://www21.in.tum.de/~popescua/rs3/GNE.html.
21. S. Kanav, P. Lammich, and A. Popescu. Supplementary material associated with this paper.
http://www21.in.tum.de/~popescua/cav2014_suppl.zip, 2014.
22. B. W. Lampson. Protection. Operating Systems Review, 8(1):18–24, 1974.
23. H. Mantel. Information flow control and applications - bridging a gap. In FME,
pp. 153–172, 2001.
24. H. Mantel. A Uniform Framework for the Formal Specification and Verification of
Information Flow Security. PhD thesis, University of Saarbrücken, 2003.
25. H. Mantel. Information flow and noninterference. In Encyclopedia of Cryptography and
Security (2nd Ed.), pp. 605–607. 2011.
26. D. McCullough. Specifications for multi-level security and a hook-up property. In IEEE
Symposium on Security and Privacy, 1987.
27. J. McLean. A general theory of composition for trace sets closed under selective interleaving functions. In IEEE Symposium on Security and Privacy, pp. 79–93, 1994.
28. J. McLean. Security models. In Encyclopedia of Software Engineering, 1994.
29. T. C. Murray, D. Matichuk, M. Brassil, P. Gammie, and G. Klein. Noninterference for
operating system kernels. In CPP, pp. 126–142, 2012.
30. T. Nipkow and G. Klein. Concrete Semantics: A Proof Assistant Approach. Forthcoming. 310 pp. http://www.in.tum.de/~nipkow/Concrete-Semantics.
31. T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL: A Proof Assistant for
Higher-Order Logic, vol. 2283 of LNCS. Springer, 2002.
32. C. O’Halloran. A calculus of information flow. In ESORICS, pp. 147–159, 1990.
33. G. J. Popek and D. A. Farber. A model for verification of data security in operating
systems. Commun. ACM, 21(9):737–749, 1978.
34. R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning About Knowledge. MIT Press, 2003.
35. J. Rushby. Noninterference, transitivity, and channel-control security policies. Technical report, Dec. 1992.
36. P. Y. A. Ryan. Mathematical models of computer security. In FOSAD, pp. 1–62, 2000.
37. A. Sabelfeld and A. C. Myers. Language-based information-flow security. IEEE Journal on
Selected Areas in Communications, 21(1):5–19, 2003.
38. A. Sabelfeld and D. Sands. Declassification: Dimensions and principles. Journal of
Computer Security, 17(5):517–548, 2009.
39. D. Sutherland. A model of information. In 9th National Security Conference, pp. 175–183,
1986.
Appendix
This appendix gives more details on the formal specification of the system (§A), on the
unwinding proof method for BD security (§B), on the formulation of confidentiality
properties as instances of BD security (§C) and their unwinding proofs (§D), as well as
on some safety (§E) and “forensic” (§F) properties we have proved as complements or
auxiliaries to BD security.
A More Details on the Specification
Here we present selected concrete aspects of the formalization for the reader who does
not want to inspect our formal Isabelle scripts [21].
A.1 The Roles
Each user is assigned a set of roles for each conference.
datatype Role = PC | Chair | Aut PaperID | Rev PaperID Nat

PC — The user is a member of the program committee
Chair — The user is a chair of the conference
Aut pid — The user is an author of the paper with ID pid
Rev pid n — The user is the n-th reviewer of the paper with ID pid
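The code generator maps such a datatype to a Scala algebraic data type of roughly the following shape (a sketch, not the literal generated code; PaperID assumed to be a string and Nat rendered as BigInt):

type PaperID = String
sealed trait Role
case object PC extends Role                                 // PC member
case object Chair extends Role                              // conference chair
final case class Aut(pid: PaperID) extends Role             // author of paper pid
final case class Rev(pid: PaperID, n: BigInt) extends Role  // n-th reviewer of pid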
A.2 Initial State
In the initial state of the system, there exists only a single user, which is the superuser
(voronkov), and there are no conferences yet.
istate =
  confIDs = []                 conf = (λcid. emptyConf)
  userIDs = [“voronkov”]       pass = (λuid. emptyPass)
  user = (λuid. emptyUser)     roles = (λcid uid. [])
  paperIDs = (λcid. [])        paper = (λpid. emptyPaper)
  pref = (λuid pid. NoPref)    voronkov = “voronkov”
  news = (λcid. [])            phase = (λcid. noPh)
A.3 Outputs
The output contains some generic output types, like outOK for a successful action with
no other output and outErr for a failed action. Moreover, outputs for various datatypes
are defined:
datatype Out =
  outOK | outErr | outBool Bool | outSTRP String × String | outSTRL List(String)
  | outCONF String × String × List(Role) × Phase | outPREF Pref | outCON Paper_Content
  | outNREV Nat × Review | outREVL List(Review) | outRREVL List(UserID × Review)
  | outDEC Decision | outDECL List(Decision) | outCIDL List(ConfID) | outUIDL List(UserID)
  | outPIDL List(PaperID) | outSTRPAL String × String × List(UserID)
18
A.4 Actions
The actions are defined as an Isabelle datatype that distinguishes between five categories
of actions: creation, update, undestructive update, reading, and listing.
datatype Act = Cact cAct | Uact uAct | UUact uuAct | Ract rAct | Lact lAct
Each category is further refined within its own datatype, based on the object of the
action: user, paper, PC member, author, conflict, review, phase, paper title and abstract,
paper content, preference, news, discussion, (the changing list of) decisions, final deci-
sion, etc.
datatype cAct =
  cUser UserID Pass String String
| cConf ConfID UserID Pass String String
| cPC ConfID UserID Pass UserID
| cChair ConfID UserID Pass UserID
| cPaper ConfID UserID Pass PaperID String String
| cAuthor ConfID UserID Pass PaperID UserID
| cConflict ConfID UserID Pass PaperID UserID
| cReview ConfID UserID Pass PaperID UserID
datatype uAct =
  uUser UserID Pass Pass String String
| uConfA ConfID UserID Pass
| uPhase ConfID UserID Pass Phase
| uPaperTA ConfID UserID Pass PaperID String String
| uPaperC ConfID UserID Pass PaperID Paper_Content
| uPref ConfID UserID Pass PaperID Pref
| uReview ConfID UserID Pass PaperID Nat Review_Content
datatype uuAct =
  uuNews ConfID UserID Pass String
| uuDis ConfID UserID Pass PaperID String
| uuReview ConfID UserID Pass PaperID Nat Review_Content
| uuDec ConfID UserID Pass PaperID Decision
datatype rAct =
  rAmIVoronkov UserID Pass
| rUser UserID Pass UserID
| rConf ConfID UserID Pass
| rNews ConfID UserID Pass
| rPaperNIA ConfID UserID Pass PaperID
| rPaperC ConfID UserID Pass PaperID
| rPref ConfID UserID Pass PaperID
| rMyReview ConfID UserID Pass PaperID
| rReviews ConfID UserID Pass PaperID
| rDecs ConfID UserID Pass PaperID
| rDis ConfID UserID Pass PaperID
| rFinalReviews ConfID UserID Pass PaperID
| rFinalDec ConfID UserID Pass PaperID
| rPrefOfPC ConfID UserID Pass PaperID UserID
datatype lAct =
  lConfs UserID Pass
| lAConfs UserID Pass
| lSConfs UserID Pass
| lMyConfs UserID Pass
| lAllUsers UserID Pass
| lAllPapers UserID Pass
| lPC ConfID UserID Pass
| lChair ConfID UserID Pass
| lPapers ConfID UserID Pass
| lMyPapers ConfID UserID Pass
| lMyAssignedPapers ConfID UserID Pass
| lAssignedReviewers ConfID UserID Pass PaperID
A.5 Step Function
Next we illustrate the definition of the step function by zooming into one of its subcases.
step s a ≡
  case a of
    Cact ca ⇒ (case ca of
        cAuthor cid uid pw pid uid' ⇒
          if e_createAuthor s cid uid pw pid uid'
          then (outOK, createAuthor s cid uid pw pid uid')
          else (outErr, s)
      | cConf cid uid pw name abs ⇒ ...
      | ...)
  | Uact ua ⇒ ...
  | UUact uua ⇒ ...
  | Ract ra ⇒ ...
  | Lact la ⇒ ...
Above, we only showed one subcase of the creation-action case in full. The semantics
of each type of action (e.g., cAuthor, which is itself a subtype of creation actions) has an
associated test for enabledness (e.g., e_createAuthor) and effect (e.g., createAuthor).
The effect is only applied if the action is enabled; otherwise an error output is issued.
The enabledness test checks if the user is allowed to perform the requested action: if
his password matches the user ID, if the conference phase is appropriate, if he has an
appropriate role, etc. For example, with the action cAuthor cid uid pw pid uid', the
user uid attempts to create (i.e., add) an author uid' for an existing paper pid, as well as
a conflict in the system database between the author and the paper. Its enabledness test
checks the following:
– the invoked IDs exist in the system;
– pw is the correct password of uid;
– cid is in the submission phase;
– uid is an author of pid, and pid is associated to the conference cid;
– uid' is different from uid.
In our formalization, this reads as:
e_createAuthor s cid uid pw pid uid' ≡
  IDsOK s [cid] [uid, uid'] [pid] ∧ pass s uid = pw ∧
  phase s cid = Submission ∧ isAut s uid pid ∧ uid ≠ uid'

createAuthor s cid uid pw pid uid' ≡
  let rls = roles s cid uid' in
  s (| roles := fun_upd2 (roles s) cid uid' (insert (Aut pid) rls),
       pref := fun_upd2 (pref s) uid' pid Conflict |)
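Here fun_upd2 is the two-argument analogue of Isabelle's built-in function update, modifying a curried binary function at a single point; a minimal sketch of the definition we assume (the name matches our scripts, the body is the evident one):

fun_upd2 f a b c ≡ f (a := (f a) (b := c))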
B More Details on the Unwinding Theorems
Here we show the formal definitions of the unwinding predicates and the statements of
the unwinding theorem variations targeting compositional reasoning.
B.1 Formal Definition of the Unwinding Predicates
Below are the formal definitions of iaction and reaction, including those of the reaction
components ignore and match:
iaction ∆ s vl s1 vl1 ≡
  ∃a1 o1 s1' vl1'. let trn1 = (s1, a1, o1, s1') in
    validTrans trn1 ∧ consume trn1 vl1 vl1' ∧ ϕ trn1 ∧ ¬ γ trn1 ∧ ∆ s vl s1' vl1'

reaction ∆ s vl s1 vl1 ≡
  ∀a o s' vl'. let trn = (s, a, o, s') in
    validTrans trn ∧ ¬ T trn ∧ consume trn vl vl' ⟶
    match ∆ s s1 vl1 a o s' vl' ∨ ignore ∆ s s1 vl1 a o s' vl'

where:

ignore ∆ s s1 vl1 a o s' vl' ≡ ¬ γ (s, a, o, s') ∧ ∆ s' vl' s1 vl1

match ∆ s s1 vl1 a o s' vl' ≡
  ∃a1 o1 s1' vl1'. let trn = (s, a, o, s') and trn1 = (s1, a1, o1, s1') in
    validTrans trn1 ∧ consume trn1 vl1 vl1' ∧ (γ trn ⟷ γ trn1) ∧
    (γ trn ⟶ g trn = g trn1) ∧ ∆ s' vl' s1' vl1'
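Both predicates refer to the consume predicate of BD security, which matches a transition against the value sequence it is expected to (partially) produce; for the reader's convenience, here is a sketch of its definition: a ϕ-transition must produce precisely the head of the current value list, while any other transition leaves the list untouched.

consume trn vl vl' ≡
  if ϕ trn then vl ≠ [] ∧ f trn = head vl ∧ vl' = tail vl
  else vl' = vl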
B.2 Verifiability of the exit Condition in a Trace-Free Manner
One can prove exit s v in a trace-free manner by exhibiting an invariant K : State ⇒
Bool and proving that it holds for s:

Lemma 1 Assume that the following hold for all valid transitions trn = (s, a, o, s')
such that K s holds:
– ϕ trn ⟶ f trn ≠ v
– K s'
Then ∀s. K s ⟶ exit s v.

Intuitively, the invariant K ensures that the value v can never be produced. We use
this lemma in all our unwinding proofs for the ∆e components.
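For example, for the paper content properties one can take K as below (cf. the exit components ∆e in §D). Assuming—as the system enforces—that content uploads for PID are only enabled while its conference is in the submission phase, no transition from a K-state satisfies ϕ, so the first condition of Lemma 1 holds vacuously; and K is preserved because conference phases only move forward.

K s ≡ ∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission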
B.3 Unwinding Components
The predicate unwind_to is a generalization of unwind, allowing a relation ∆ to unwind
not only to itself, but to the disjunction of any set of relations ∆s = {∆1, . . . , ∆n}:

unwind_to ∆ ∆s ≡
  ∀s vl s1 vl1. reach¬T s ∧ reach s1 ∧ ∆ s vl s1 vl1 ⟶
    ((vl ≠ [] ∨ vl1 = []) ∧ reaction (disj ∆s) s vl s1 vl1) ∨
    iaction (disj ∆s) s vl s1 vl1 ∨
    (vl ≠ [] ∧ exit s (head vl))
Using this, we formulate a compositional version of the unwinding theorem:

Compositional Unwinding Theorem. For each ∆ ∈ ∆s, let next ∆ ⊆ ∆s be a (possibly
empty) "continuation" of ∆, and let ∆0 ∈ ∆s be a chosen "initial" relation. Assume the
following hold:
– ∀vl vl1. B vl vl1 ⟶ ∆0 istate vl istate vl1
– ∀∆ ∈ ∆s. unwind_to ∆ (next ∆)
Then the (whole) system is BD-secure.

Proof idea. One can show that unwind (disj ∆s) holds and use the original unwinding
theorem.
Since in our proofs we shall only need a linear "next" network, for convenience
we prove a linear variation of the above where, assuming ∆s = {∆1, . . . , ∆n, ∆e}, we
allow ∆i to unwind only to itself, to ∆i+1 (if i < n), or to an exit component ∆e. In
addition, we employ the predicate unwind_cont to restrict the unwinding of ∆i to proper
continuations (i.e., no exits) and the predicate unwind_exit to restrict the unwinding of
∆e to exits (as depicted in the paper's Fig. 4):

unwind_cont ∆ ∆s ≡
  ∀s vl s1 vl1. reach¬T s ∧ reach s1 ∧ ∆ s vl s1 vl1 ⟶
    ((vl ≠ [] ∨ vl1 = []) ∧ reaction (disj ∆s) s vl s1 vl1) ∨
    iaction (disj ∆s) s vl s1 vl1

unwind_exit ∆ ≡
  ∀s vl s1 vl1. reach¬T s ∧ reach s1 ∧ ∆ s vl s1 vl1 ⟶
    vl ≠ [] ∧ exit s (head vl)

Sequential Unwinding Theorem. For each ∆ ∈ ∆s, let next ∆ ⊆ ∆s be a (possibly empty)
"continuation" of ∆, and let ∆0 ∈ ∆s be a chosen "initial" relation. Assume the following
hold:
– ∀vl vl1. B vl vl1 ⟶ ∆0 istate vl istate vl1
– ∀i ∈ {1, . . . , n−1}. unwind_cont ∆i {∆i, ∆i+1, ∆e}
– unwind_cont ∆n {∆n, ∆e}
– unwind_exit ∆e
Then the (whole) system is BD-secure.

Proof idea. Immediate from the compositional unwinding theorem.
Employing the sequential unwinding theorem in our proofs of confidentiality prop-
erties had the benefit of allowing (and encouraging!) separation of concerns: the ∆i's and
the transitions between them constitute the main sequential flow of the phase-directed
proof, and ∆e takes the "unnatural" corner cases out of our way.
The BD security parameters of each property are its observation filter γ (s, a, o, s'), observation producer g (s, a, o, s'), value filter ϕ (s, a, o, s'), value producer f (s, a, o, s'), declassification trigger T (s, a, o, s'), and declassification bound B vl vl1. In all cases,

γ (s, a, o, s') ≡ userOf a ∈ UIDs        g (s, a, o, s') ≡ (a, o)

Paper Content — parameters UIDs, PID
ϕ: o = outOK ∧ (∃cid uid pw pct. a = Uact (uPaperC cid uid pw PID pct))        f: pct
T (Paper Authorship): ∃uid ∈ UIDs. ∃cid. isAut s' uid PID
B (Last Uploaded Version): vl ≠ [] ≠ vl1 ∧ last vl = last vl1
T (Paper Authorship or PC Membership^B):
   ∃uid ∈ UIDs. ∃cid. isAut s' uid PID ∨ (isPC s' cid uid ∧ phase s' cid ≥ Bidding)
B (Absence of Any Upload): vl ≠ []

Review — parameters UIDs, PID, N
ϕ: o = outOK ∧ (∃cid uid pw rct. a = Uact (uReview cid uid pw PID N rct) ∨
                                 a = UUact (uuReview cid uid pw PID N rct))
f: (phase s cid, rct) for the first trigger–bound pair below, rct for the other two
T (Review Authorship): ∃uid ∈ UIDs. ∃cid. isRevNth s' cid uid PID N
B (Last Edited Version Before Discussion and All the Later Versions): BREV vl vl1
T (Review Authorship or Non-Conflict PC Membership^D):
   ∃uid ∈ UIDs. ∃cid. isRevNth s' cid uid PID N ∨
     (isPC s' cid uid ∧ pref s' uid PID ≠ Conflict ∧ phase s' cid ≥ Discussion)
B (Last Edited Version): vl ≠ [] ≠ vl1 ∧ last vl = last vl1
T (Review Authorship or Non-Conflict PC Membership^D or Paper Authorship^N):
   ∃uid ∈ UIDs. ∃cid. isRevNth s' cid uid PID N ∨
     (isPC s' cid uid ∧ pref s' uid PID ≠ Conflict ∧ phase s' cid ≥ Discussion) ∨
     (isAut s' uid PID ∧ phase s' cid ≥ Notification)
B (Absence of Any Edit): vl ≠ []

Discussion — parameters UIDs, PID
ϕ: o = outOK ∧ (∃cid uid pw com. a = UUact (uuDis cid uid pw PID com))        f: com
T (Non-Conflict PC Membership): ∃uid ∈ UIDs. ∃cid. isPC s' cid uid ∧ pref s' uid PID ≠ Conflict
B (Absence of Any Edit): vl ≠ []

Decision — parameters UIDs, PID
ϕ: o = outOK ∧ (∃cid uid pw dec. a = UUact (uuDec cid uid pw PID dec))        f: dec
T (Non-Conflict PC Membership): ∃uid ∈ UIDs. ∃cid. isPC s' cid uid ∧ pref s' uid PID ≠ Conflict
B (Last Edited Version): vl ≠ [] ≠ vl1 ∧ last vl = last vl1
T (Non-Conflict PC Membership or PC Membership^N or Paper Authorship^N):
   ∃uid ∈ UIDs. ∃cid. (isPC s' cid uid ∧ pref s' uid PID ≠ Conflict) ∨
     (isPC s' cid uid ∧ phase s' cid ≥ Notification) ∨
     (isAut s' uid PID ∧ phase s' cid ≥ Notification)
B (Absence of Any Edit): vl ≠ []

Reviewer Assignment — parameters UIDs, PID
ϕ: o = outOK ∧ (∃cid uid pw uid'. a = Cact (cReview cid uid pw PID uid'))
f: fREVA s' cid uid'
T (Non-Conflict PC Membership^R):
   ∃uid ∈ UIDs. ∃cid. isPC s' cid uid ∧ pref s' uid PID ≠ Conflict ∧ phase s' cid ≥ Reviewing
B (Non-Conflict PC Membership): BREVA vl vl1

Phase stamps: B = Bidding, D = Discussion, N = Notification, R = Reviewing

Fig. 6: The confidentiality properties, formally
C More Details on the Confidentiality Properties Formalization
Here we indicate the formal definitions of the BD security parameters used for the
statements in the paper’s Fig. ??.
Fig. 6 is an extension of Fig. ?? showing the formal instantiations of the observa-
tional infrastructure (Val, ϕ, f, Obs, γ, g) and the declassification specification (B, T). (The
value and observation domains Val and Obs can be inferred from the expressions defin-
ing the functions.)
The properties are parametrized by the items shown with the sources:
the observers/attackers: a set of users UIDs
the source: a paper ID PID and, for reviews, an additional number N
PID identifies a paper paper s PID which, in turn, stores a paper content, a review
list, a discussion, and a decision. Thus, in the cases of paper content, discussion, and
decision, PID determines the data under scrutiny via the components of items of type
Paper stored in the state. PID also determines this data for the reviewer assignment, but
via the roles component of the state. On the other hand, for reviews, we need to further
indicate a number N that chooses one of the multiple reviews associated to paper s PID.
In all cases, the observation filter simply checks that the user who performs the
action, userOf a, is one of the observers, i.e., is in UIDs; and the produced observation
consists of the action and the output.
The value filter ϕ checks if the action of the given transition (s, a, o, s') has a cer-
tain form, e.g., is a paper or review update or a review creation, in which case f selects
the relevant item,3 e.g., the title, abstract and content of a paper. For reviews, on one
occasion, besides the specific data consisting of the review content rct, f also returns
phase s, the phase of the conference—the latter is crucially used by the declassification
bound for reviews, to express nuances such as "the last edited version before discus-
sion."
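Concretely, such a ϕ/f pair can be rendered by pattern matching on the action. The following sketch shows the paper content instance from Fig. 6 (with the case expression making the scoping of pct, discussed in footnote 3, explicit); the rendering is indicative of, but not identical to, the scripts:

ϕ (s, a, o, s') ≡ o = outOK ∧ (∃cid uid pw pct. a = Uact (uPaperC cid uid pw PID pct))
f (s, a, o, s') ≡ case a of Uact (uPaperC cid uid pw pid pct) ⇒ pct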
The declassification triggers and bounds are the ones from Fig. ??, but listed to-
gether with their formal definitions. We use the following general-purpose operators
and predicates:
– fst and snd return the first and second components of a pair (x, y)
– Pair takes x, then y (in a curried fashion), and returns the pair (x, y)
– distinct xl states that the list xl has no repetitions
– map h xl returns the list obtained by applying the function h to all the elements of the
list xl
– conj bl returns the conjunction of the list bl of booleans
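All of these except conj are standard library operators; for conj, a minimal definition matching the above description would be:

conj bl ≡ foldr (∧) bl True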
The bound BREV vl vl1 of the first review property is defined by decomposing vl
and vl1 into the review edits during the reviewing phase and those during the discussion
phase—the former should have the same last element and the latter should be identical:

∃ul ul1 wl.
  vl = (map (Pair Reviewing) ul) · (map (Pair Discussion) wl) ∧
  vl1 = (map (Pair Reviewing) ul1) · (map (Pair Discussion) wl) ∧
  ul ≠ [] ≠ ul1 ∧ last ul = last ul1

For example, vl = [(Reviewing, r1), (Reviewing, r2), (Discussion, d)] and vl1 =
[(Reviewing, r2), (Discussion, d)] are related by BREV, via the witnesses ul = [r1, r2],
ul1 = [r2], and wl = [d].

3 As a matter of variable scoping, f always refers to the items quantified by ϕ. E.g., for paper
content, the definition of f refers to the item pct whose existence is asserted by ϕ—in all such
cases, the referred items, if they exist, are also unique, hence f is well-defined.
For the reviewer assignment property, fREVA s' cid uid' returns a pair consisting of
the ID of the assigned reviewer and a Boolean flag indicating whether or not he is a
PC member having no conflict with the paper: (uid', isPC s' cid uid' ∧ pref s' uid' PID ≠
Conflict). The bound BREVA vl vl1 states about vl1 that all its flags hold true (expressing
the public knowledge that reviewers are non-conflict PC members) and that all its user
IDs are distinct (expressing another, less important, piece of public knowledge—that
already assigned reviewers cannot be assigned again to the same paper PID):

conj (map snd vl1) ∧ distinct (map fst vl1)
D Some Unwinding Relations Used in Proofs
We proved all the confidentiality properties from Fig. 6 using the sequential unwind-
ing theorem. In the paper’s Fig. 5 we showed the unwinding components employed
in the proof of the first paper content property from Fig. 6 (PAP2). Next we show the
unwinding components of a few other properties—each time B refers to the specific
declassification bound from the corresponding row in Fig. 6.
The Second Paper Content Property (PAP1)
The state-equality relation =PID is the same as for the first paper content property:
equality everywhere except on the content of the paper PID.

∆1 s vl s1 vl1 ≡ ¬ (∃cid. PID ∈ paperIDs s cid) ∧ s = s1 ∧ vl ≠ []
∆2 s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid ∧ phase s cid = Submission) ∧ s =PID s1
∆3 s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission) ∧ s =PID s1 ∧ vl = vl1 = []
∆e s vl s1 vl1 ≡ (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission) ∧ vl ≠ []
The First Review Property (REV)

The state-equality relation =PID,N expresses equality everywhere except on the Nth
review of the paper PID.

∆1 s vl s1 vl1 ≡ (∀cid. PID ∈ paperIDs s cid ⟶ phase s cid < Reviewing) ∧ s = s1 ∧ B vl vl1
∆2 s vl s1 vl1 ≡
  (∃cid. PID ∈ paperIDs s cid ∧ phase s cid = Reviewing ∧
   ¬ (∃uid. isRevNth s cid uid PID N)) ∧
  s = s1 ∧ B vl vl1
∆3 s vl s1 vl1 ≡
  (∃cid uid. PID ∈ paperIDs s cid ∧ phase s cid = Reviewing ∧ isRevNth s cid uid PID N) ∧
  s =PID,N s1 ∧ B vl vl1
∆4 s vl s1 vl1 ≡
  (∃cid uid. PID ∈ paperIDs s cid ∧ phase s cid ≥ Reviewing ∧ isRevNth s cid uid PID N) ∧
  s = s1 ∧ (∃wl. vl = vl1 = map (Pair Discussion) wl)
∆e s vl s1 vl1 ≡
  vl ≠ [] ∧
  ( (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission) ∨
    (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Reviewing ∧ ¬ (∃uid. isRevNth s cid uid PID N)) ∨
    (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Reviewing ∧ fst (head vl) = Reviewing) )
The Second Review Property
Here we use two state-equality relations different from plain equality:
– the relation =PID,N from the first review property
– the relation =²PID,N, which expresses equality everywhere except on the older ver-
sions of the Nth review of paper PID (thus, s =²PID,N s1 implies that the current,
i.e., last, version of the review (N, PID) is the same in s as in s1)

Below, reviewsOf gives the list of reviews of a paper and _ ! N gives the Nth element of
a list; hence, reviewsOf (paper s PID) ! N returns the Nth review of the paper PID, a
review which is itself the list of its different versions.

∆1 s vl s1 vl1 ≡ the same as for the first review property (but with the specific B)
∆2 s vl s1 vl1 ≡ the same as for the first review property (but with the specific B)
∆3 s vl s1 vl1 ≡
  (∃cid uid. PID ∈ paperIDs s cid ∧ phase s cid ∈ {Reviewing, Discussion} ∧
   isRevNth s cid uid PID N) ∧
  s =PID,N s1 ∧ B vl vl1
∆4 s vl s1 vl1 ≡
  (∃cid uid. PID ∈ paperIDs s cid ∧ phase s cid ≥ Reviewing ∧ isRevNth s cid uid PID N) ∧
  s =²PID,N s1 ∧ vl = vl1 = [] ∧
  reviewsOf (paper s PID) ! N ≠ [] ≠ reviewsOf (paper s1 PID) ! N
∆e s vl s1 vl1 ≡
  vl ≠ [] ∧
  ( (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Submission) ∨
    (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Reviewing ∧ ¬ (∃uid. isRevNth s cid uid PID N)) ∨
    (∃cid. PID ∈ paperIDs s cid ∧ phase s cid > Discussion) )
Above, it is interesting to see how different versions of state equality are maintained
throughout the sequential unwinding components. In all cases we start with plain equal-
ity. In the first case (for PAP2), we switch to =PID (where we stay until the end). In the
second case (for REV), we switch to =PID,N, and then back to plain equality. In the third
case, we first switch to =PID,N, and then to =²PID,N, which is stronger than =PID,N but
weaker than plain equality. Roughly speaking, the state-equality strength is turned up
and down throughout the proofs as follows:
– it is turned down when the power of the observer increases,4 e.g., in the discussion
phase, when reviews become more public
– it is turned up when the declassification bound allows it, e.g., when consuming the
last values, assumed to be equal

4 This increase is kept under control by the assumed failure of the declassification trigger.
E Complementary Safety Properties
In the unwinding proofs, we needed about 20 safety properties, among which:
– A paper is never registered at two conferences
– An author always has a conflict with his papers (DIS1)
– A paper always has at least one author
– A user never reviews a paper with which he has a conflict
– A user never gets to write more than one review for a given paper
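Such properties are phrased as invariants over the reachable states. For instance, the first one can be stated as follows (a sketch; the statement in our scripts may differ in minor bookkeeping details):

reach s ∧ PID ∈ paperIDs s cid ∧ PID ∈ paperIDs s cid' ⟶ cid = cid'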
F Complementary Forensic Properties
Our proved confidentiality properties show upper bounds on information release that
are valid unless/until some trigger T occurs, e.g., authorship or PC membership. While
T can in principle depend on all four components of a transition (s, a, o, s'), our formal-
ized instances only depend on s', e.g., isAut s' uid PID or isPC s' cid uid. Two questions
arise.
First, why do we consider the target state s' and not the source state s? The answer is
that typically the two choices are equivalent, but our choice seems a priori more natural:
never T holding for a valid trace [(s1, a1, o1, s2), (s2, a2, o2, s3), . . . , (sn−1, an−1, on−1, sn)]
then means that the corresponding state condition fails for s2, . . . , sn (thus also includ-
ing the last state); and typically the condition also fails for the initial state s1 = istate.
Second, why do we consider for T a state-based condition and not an event-
based one? E.g., instead of defining T as being an author in the target state
(isAut s' uid PID), why not define it as successfully taking the action of becoming an
author (o being outOK and a being one of the two types of actions that make uid an
author for PID, namely: uid creating the paper PID, or another author uid' assigning uid
as a coauthor for PID)? A first answer to this is that the two choices are equivalent (e.g.,
isAut s' uid PID occurs on a valid trace iff one of the aforementioned author-creation
actions occurs successfully), and our choice is easier to state, since it does not require the
bookkeeping of all the actions leading to a certain situation.
The second answer touches on a more fundamental concern raised by the ques-
tion, namely: we have proved that one does not learn such and such unless one acquires
a certain role, e.g., authorship; but how can we know that only "legal" (or "intended")
users acquire that role—in particular, how can we know that an arbitrary user cannot ac-
quire it? The answer is: regardless of how we phrase the trigger, this concern has
to be addressed separately. To this end, we track back in valid traces all possible chains
of events that may have led to certain roles—the result is a form of integrity properties that
we call forensics.
For instance, we prove that, if isAut s uid PID holds at the end of a valid trace5
[trn1, trn2, . . . , trnn] (i.e., such that s is the target state of trnn), then there exists a sub-
sequence of transitions [trn_{i_1}, . . . , trn_{i_k}] such that i_k = n and the following hold:
– in trn_{i_1}, some user uid1 successfully registers (i.e., creates) the paper PID
– for all j ∈ {1, . . . , k−1}, in trn_{i_{j+1}}, the user uidj successfully adds the user uidj+1
as a coauthor
In other words, authorship of PID can only arise via a chain of coauthor assignments
starting from the paper's creator. We prove such forensic properties for all the trigger
components used in our security properties:
1. isChair s cid uid holds at the end of a valid trace tr iff there is a subsequence of tr
starting with the creation of the conference cid by a user uid1, and continuing with
a chain of successful chair-creation actions having subjects uid1, . . . , uidk, in which
uidk = uid and each uidi assigns uidi+1 as a chair
2. isPC s cid uid holds at the end of a valid trace tr iff
– either isChair s cid uid holds at some point in tr
– or there exists uid' ≠ uid such that isChair s cid uid' holds at some point in tr,
and after that point there is a successful action with uid' assigning uid as a chair
or as a committee member
3. isRevNth s cid uid PID N holds at the end of a valid trace tr iff isPC s cid uid holds at
some point in tr, there exists uid' such that isChair s cid uid' holds at some point in
tr, and after both these points there is a successful review-creation action by uid'
that assigns uid as the Nth reviewer of PID
4. pref s uid PID ≠ Conflict holds at the end of a valid trace tr iff tr does not contain
– a successful creation by uid of the paper PID
– or a successful assignment of uid as a coauthor of PID6
– or a successful declaration of conflict by uid with PID which is not overwritten
by any successful removal of the conflict by uid
– or a successful declaration, by some uid' for which isAut s uid' PID holds,
of a conflict between uid and PID which is not overwritten by any successful
removal of the conflict by uid7
5. If ph > noPh, then phase s cid = ph holds at the end of a valid trace tr iff there
is a subsequence of tr of successful cid-phase-change actions starting from noPh
and advancing sequentially through the phases until ph, such that the first action in the
sequence is performed by the voronkov and the remaining ones are performed by
cid's chairs

5 Thanks to the prefix closure of the set of valid traces, by analyzing the forensics of a property
holding at the end of a valid trace we also cover the case of that property holding at some point
on a valid trace.
6 Either of the last two actions automatically produces an irrevocable conflict of uid with PID.
7 Allowing a PC member to remove the conflict declared by an author is a feature of our system
aimed at preventing deadlocks caused by authors declaring too many conflicts.
Note that, to obtain the full forensic story of isPC, one needs to consider that of isChair;
likewise, the full forensic story of isRevNth relies on those of isChair and isPC, and
similarly for the other properties.