Optimistic Synchronization-Based State-Space Reduction
Scott D. Stoller∗
Ernie Cohen†
13 March 2006
Abstract
Reductions that aggregate fine-grained transitions into coarser transitions can significantly
reduce the cost of automated verification, by reducing the size of the state space. We propose
a reduction that can exploit common synchronization disciplines, such as the use of mutual
exclusion for accesses to shared data structures. Exploiting them using traditional reduction
theorems requires checking that the discipline is followed in the original (i.e., unreduced) system.
That check can be prohibitively expensive. This paper presents a reduction that instead requires
checking whether the discipline is followed in the reduced system. This check may be much
cheaper, because the reachable state space is smaller.
1 Introduction
For many concurrent software systems, a straightforward model of the system has such a large and
complicated state space that automated verification, by automated theorem-proving or state-space
exploration (model checking), is infeasible. Reduction is an important technique for reducing the
size of the state space by aggregating transitions into coarser-grained transitions.
When exploring the state space of a concurrent system, context switches between threads are
typically allowed before each transition. A simple example of a reduction for concurrent systems is
to inhibit context switches before transitions that access only unshared variables. This effectively
increases the granularity of transitions. Thus, one can regard this and similar reductions as defining
a reduced system, which is a coarser-grained version of the original system. The reduced system
may have dramatically fewer states than the original system. A reduction theorem asserts that
certain properties are preserved by the transformation.
We consider a more powerful reduction that exploits common synchronization disciplines. For
example, in a system that uses mutual exclusion on accesses to some shared variables—called
protected variables—our reduction inhibits context switches before transitions that access only
unshared variables and protected variables. Such transitions are called invisible transitions; other
transitions are called visible transitions. Informally, this reduction is safe because protected
variables cannot be accessed concurrently, and allowing context switches before the synchronization
operations (lock acquire, etc.) that protect them is sufficient. The model-checking experiments
∗This work was supported in part by NSF under Grants CCR-9876058, CCR-0205376, and CNS-0509230 and ONR
under Grants N00014-01-1-0109 and N00014-02-1-0363. Address: Computer Science Dept., Stony Brook University,
Stony Brook, NY 11794-4400. Email: stoller@cs.sunysb.edu
†Microsoft Corp. Email: ernie.cohen@acm.org
Web: http://www.cs.sunysb.edu/˜stoller/
reported in [Sto02] are based on a similar reduction, which decreased memory usage (which is
proportional to the number of states) by a factor of 25 or more. Such reductions can also decrease
the computational cost of the automated theorem-proving needed for thread-modular verification
[FFQ02, FQS02].
Traditional reduction theorems, such as [Lip75, CL98, Coh00], can also exploit such synchronization
disciplines. However, a hypothesis of these traditional theorems is that the allegedly protected
variables are indeed protected (by synchronization that enforces mutual exclusion) in the original
(i.e., unreduced) system. How can we establish this? Static analyses like [FF01] can automatically
provide a conservative approximation but sometimes return “don’t know”. For general finite-state
systems, it might seem that the only way to automatically obtain exact information about whether
the synchronization discipline is followed (i.e., the selected variables are actually protected) is to
express this condition as a history property and check it by state-space exploration of the original
system. But this would be about as expensive as checking correctness requirements on the original
system, making the reduction almost pointless.
Our reduction theorem implies that one can determine exactly during state-space exploration
of the reduced system whether the synchronization discipline is followed in the original system.
For generality, the reduction theorem is expressed without explicit reference to mutual exclusion
or synchronization. It is expressed in terms of a predicate q, which in the application of the
reduction theorem to mutual exclusion synchronization is chosen to be the history predicate “the
synchronization discipline has been violated”. The theorem assumes that the transition relation of
each thread is partitioned into invisible transitions and visible transitions, as described above. This
allows us to define the reduced system, in which context switches are allowed only immediately
before visible transitions. The reduction theorem states that if the original system has a reachable
state in which q holds, then so does the reduced system, provided the invisible transitions satisfy
several conditions, most notably that (i) a transition cannot enable or disable invisible transitions
of other threads, (ii) as long as q is false (in other words, as long as the synchronization discipline
is followed), a transition commutes to the right of an invisible transition of another thread, and
(iii) invisible transitions cannot falsify q (in other words, they cannot hide a violation of the
synchronization discipline).
In the application to mutual exclusion synchronization, informally, the first condition above
holds because synchronization operations that may block are visible; the second condition holds
because, in the absence of violations of the synchronization discipline, the set of variables accessed
by a transition is disjoint from the set of variables accessed by an immediately following invisible
transition of another thread, because accesses to a protected variable by different threads are
separated by intervening synchronization operations; and the third condition holds because, once
the synchronization discipline has been violated, it remains violated for the rest of the execution.
Note that one needs to prove only once that the hypotheses of the reduction theorem hold when
instantiated for mutual exclusion synchronization; this establishes applicability of the reduction
theorem to all systems that use such synchronization. The role of this proof is analogous to
the role of proofs needed with traditional partial-order methods to show validity of a proposed
independence relation on operations of a data type (e.g., queues or locks).
To apply the reduction theorem to a system that uses mutual exclusion synchronization, the user
guesses which variables are protected (this determines which transitions are visible, as described
above) and how they are protected. The latter is done by supplying exclusive access predicates
[FQ03]. For each protected variable x and each thread i, there is an exclusive access predicate
e^x_i. The synchronization discipline requires that e^x_i hold in states from which thread i can execute
a transition that accesses x. Mutual exclusion is expressed by the requirement that, for every
variable x and every two distinct threads i and j, e^x_i and e^x_j are mutually exclusive (i.e., cannot
hold simultaneously). Locks, by themselves or in the form of monitors, are probably the most
widely used synchronization mechanism. For systems that use them, we describe in Section 9 how
to automatically guess which variables are protected (by monitors) and determine the associated
exclusive access predicates.
Our reduction theorem is designed to be used together with traditional reduction theorems.
Suppose a traditional reduction theorem asserts that some property φ is preserved by the reduction
if the original system follows the synchronization discipline. After checking that the reduced system
follows the discipline and satisfies φ, one can use our reduction theorem to conclude that the original
system follows the discipline, and then use the traditional reduction theorem to conclude that the
original system satisfies φ.
A simple example of checking the hypotheses of a reduction during state-space exploration of the
reduced system is mentioned in [HP95]. There, the hypotheses to be checked are whether specified
processes ever access specified variables. Proving soundness in that case is relatively easy, because
the hypotheses are unaffected by reordering of the events in an execution.
The reduction in [Sto02] is similar in spirit to the one in this paper. The main contributions
of this paper relative to [Sto02] are a reduction that applies to systems that use arbitrary
synchronization mechanisms to achieve mutual exclusion (the results in [Sto02] apply only when monitors
are used), and significantly shorter and cleaner proofs, based on omega algebra. Similar results could
presumably be proved in a transition-system framework, like the one in [God96], but our experience
attempting to do that suggests that the algebraic framework makes the proofs easier to discover,
shorter, and cleaner.
The main contribution of this paper compared to an earlier version [SC03] is a more liberal
definition of “invisible transition”, which allows some synchronization operations—for example, the
release, notify, and notifyAll operations on monitors—to be classified as invisible. The definition
in [SC03] forces, roughly speaking, all synchronization operations to be classified as visible.
Our method and traditional partial-order methods (e.g., stubborn sets [Val97], ample sets
[CGP99], and persistent sets [God96]) both exploit independence (commutativity) of transitions,
but our method can establish independence of transitions—and hence achieve a reduction—in many
cases where traditional partial-order methods cannot. Traditional partial-order methods, as
implemented in tools such as Spin [Hol97] and VeriSoft [God97], use two kinds of information to
determine independence of transitions: program-specific information, obtained by static analysis,
about which processes may perform which operations on which objects (e.g., only process P2 sends
messages on channel C1), and manually supplied program-independent information about
dependencies between operations on selected datatypes (e.g., a send operation on a full channel is disabled
until a receive operation is performed on that channel).
Our method has two main advantages over traditional partial-order methods. First, our method
can exploit more complicated program-specific information to determine independence of
transitions, e.g., the invariant that a particular variable is always protected by particular synchronization
constructs. Such invariants take into account the context (specifically, synchronization context) in
which operations are performed. In contrast, traditional partial-order methods are based on
analysis of which operations are performed by each thread with little regard for the context in which
the operations occur. Second, our method does not rely on any conservative static analysis. In
contrast, traditional partial-order methods rely on conservative static analysis to determine which
processes may perform which operations on which objects; for example, static analysis may be used
to determine whether more than one thread can invoke a given operation on the queue accessed
by a given program statement. For programs in relatively simple modeling languages, inexpensive
and precise static analysis of such properties is feasible. For programs that contain references (or
pointers), arrays, procedure calls, and dynamic thread creation, conservative static analyses will
generally be imprecise. This imprecision will cause opportunities for reduction to be overlooked,
decreasing the effectiveness of the traditional partial-order method. Since our method does not rely
on any conservative static analysis, it has no difficulty with references, etc.
Section 2 presents some motivating examples. Section 3 introduces omega algebra, which is a
simple and powerful framework for reductions. Section 4 presents and proves the reduction theorem.
The theorem is expressed in a very general algebraic style and is applicable to a variety of system
models, e.g., shared variables or message passing. Section 5 defines a simple model of concurrent
systems with shared variables, and Section 6 defines a synchronization discipline based on mutual
exclusion. Section 7 shows that the reduction theorem applies in that context. Section 8 presents a
methodology for using the reduction. Section 9 describes how the methodology can be automated
for systems that use monitors for synchronization. Section 10 uses a simple example to compare
our reduction with traditional partial-order methods.
2 Motivating Examples
This section describes three examples of systems for which the current reduction is more effective (at
reducing the number of explored states) than traditional partialorder methods and the reduction in
[Sto02]. For the first example, we explain why in some detail; explanations for the other examples
are roughly similar. These examples are based mainly on descriptions in [SBN+97] of code in real
systems.
Semaphores. A user thread gets a buffer from a buffer pool, sends a request to a device driver
thread, supplying the operation type (read or write), a buffer, and a semaphore as arguments, and
then waits for completion of the operation by invoking down() on the semaphore. The device driver
thread receives the request, performs the operation (reading or writing the buffer as appropriate),
and then calls up() on the semaphore. The buffers can be classified as protected variables, allowing
transitions that access them to be classified as invisible by our reduction.
For concreteness, consider a system with two user threads and one driver thread, running the
following pseudocode. The ellipses represent the actual device access and other operations. Each
thread’s local variables are subscripted by a thread identifier. Uppercase letters denote control
points.
user1:  A b1 = getBuf();  B sendRequest(READ, b1, s1);  C down(s1);  D read(b1);  E ···
user2:  A b2 = getBuf();  B write(b2);  C sendRequest(WRITE, b2, s2);  D down(s2);  E ···
driver: A while (true)
        B   receiveRequest(op_d, b_d, s_d);
        C   if (op_d = READ) ··· D write(b_d) ··· else ··· E read(b_d) ···
        F   up(s_d)
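The request/response pattern can be exercised with a small runnable sketch. The Python stand-ins below are ours, not from the paper: a Queue models sendRequest/receiveRequest, and a one-slot list models the buffer.

```python
import threading
import queue

requests = queue.Queue()  # stand-in for sendRequest/receiveRequest

def user_read():
    buf = [None]                       # getBuf(): a fresh one-slot buffer
    sem = threading.Semaphore(0)
    requests.put(("READ", buf, sem))   # sendRequest(READ, buf, sem)
    sem.acquire()                      # down(sem): wait for completion
    return buf[0]                      # read(buf): safe, the driver is done

def driver():
    while True:
        op, buf, sem = requests.get()  # receiveRequest(op_d, b_d, s_d)
        if op == "READ":
            buf[0] = "device-data"     # driver writes the buffer
        else:
            _ = buf[0]                 # driver reads the buffer (WRITE case)
        sem.release()                  # up(sem): completion signal

threading.Thread(target=driver, daemon=True).start()
print(user_read())
```

The user accesses the buffer only before the request and after down(); the driver accesses it only between receive and up(). That handoff is exactly the mutual exclusion that lets buffer accesses be classified as invisible.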
Thread user1 has exclusive access to the buffer to which b1 points when user1 is at control point
B, D, or E. Other threads have similar exclusive access predicates for buffers. Let pc_i denote the
program counter of thread i. The exclusive access predicates for buffers are:

e^b_user1 = (∗b1 = b ∧ pc_1 ∈ {B, D, E})
e^b_user2 = (∗b2 = b ∧ pc_2 ∈ {B, C, E})
e^b_driver = (∗b_d = b ∧ pc_d ∈ {C, D, E, F})
Consider a state s0 in which pc_1 = C ∧ pc_2 = B ∧ pc_d = D. With the reduction in this paper,
buffers are protected variables, so reads and writes of buffers are invisible, and our reduction inhibits
context switches before them. Accesses to unshared variables, such as op_d, are also invisible. Thus,
the driver will receive the request, test the condition on op_d, and write to the buffer without any
intervening context switches. In contrast, traditional partial-order methods, even sophisticated
ones, will allow a context switch before the driver’s access to the buffer; this increases the number
of explored states. For concreteness, consider selective search using persistent sets computed by
the conditional stubborn set algorithm (CSSA) [God96]. A persistent set in a state s is a subset
of the enabled transitions in s that satisfies certain conditions. The selective search explores, from
each state, only a persistent set of transitions. The conditions in the definition of persistent set
ensure that this preserves certain properties of the state space, such as reachability of deadlocks.
CSSA is parameterized by a statically determined binary dependence relation, called
might-be-the-first-to-interfere-with, on operations. In this example, static alias analysis determines that b2
and b_d may be aliased, i.e., they may point to the same buffer (at the same or different times).
Consequently, the might-be-the-first-to-interfere-with relation relates each write(b2) operation with
each write(b_d) operation, and so on. In state s0, user2 and the driver have enabled transitions that
perform write(b2) and write(b_d), respectively, so CSSA includes transitions of both threads in the
persistent set. The reduction in [Sto02], which is based on analysis of locks, is not effective for this
system, because it uses semaphores.
Memory Reuse. Some systems reuse objects (or structures) by placing them on a free list
when they are not in use. These objects may be protected by different locks each time they are
reused, violating the locking discipline of [Sto02]. For example, consider a file system in which
blocks in a file are protected by the lock associated with (the inode of) that file, and blocks on the
free list are protected by the lock associated with the free list. A block may be in a different file,
and hence protected by a different lock, each time it is reused. Let m_F denote the lock associated
with the free list. Let m_f denote the lock associated with file f. The exclusive access predicate
e^b_i for a block b might be

(onFreeList(b) ∧ m_F.owner = i) ∨ (∃ file f : allocatedTo(b, f) ∧ m_f.owner = i)
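The predicate transcribes directly into code. The state representation below (a free-list set, a block-to-file allocation map, and lock-owner values) is our own illustrative encoding:

```python
def exclusive_access(b, i, free_list, allocated_to, free_lock_owner, file_lock_owner):
    """e^b_i: does thread i have exclusive access to block b?"""
    # (onFreeList(b) and m_F.owner = i) ...
    if b in free_list and free_lock_owner == i:
        return True
    # ... or (exists file f : allocatedTo(b, f) and m_f.owner = i)
    f = allocated_to.get(b)
    return f is not None and file_lock_owner.get(f) == i

# A block is protected by the free-list lock while free, and by its
# file's lock after being reused in a file.
free_list = {"blk0"}
allocated_to = {"blk1": "fileA"}
assert exclusive_access("blk0", 1, free_list, allocated_to, 1, {})
assert exclusive_access("blk1", 2, free_list, allocated_to, 1, {"fileA": 2})
assert not exclusive_access("blk1", 1, free_list, allocated_to, 1, {"fileA": 2})
```

The same block can satisfy the predicate for different threads over time, via different locks, which is exactly the behavior that defeats a fixed lock-per-variable discipline.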
Master-Worker Paradigm. In the master-worker paradigm, a master thread assigns tasks to
worker threads. Typically, each task is represented by an object created by the master thread and
passed to a worker thread. The master thread does not access a task object after passing it to
a worker. Task objects can be classified as protected. Suppose each worker thread w has a field
w.task that refers to the worker’s task. For a task object x, the exclusive access predicate
e^x_master holds before x has been passed to a worker thread, and e^x_w holds when w.task = x.
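A minimal sketch of the corresponding predicate; the Worker class and the handed_off set are our illustrative encoding of “x has been passed to a worker”:

```python
class Worker:
    def __init__(self):
        self.task = None  # w.task: the worker's current task object

def exclusive_access(x, thread, handed_off):
    """e^x_thread for a task object x."""
    if thread == "master":
        return x not in handed_off   # e^x_master: before the hand-off
    return thread.task is x          # e^x_w: holds when w.task = x

w = Worker()
task = object()
handed_off = set()
assert exclusive_access(task, "master", handed_off)   # master owns the task
w.task = task
handed_off.add(task)                                  # master passes it to w
assert not exclusive_access(task, "master", handed_off)
assert exclusive_access(task, w, handed_off)
```

At every moment exactly one thread's predicate holds for the task, so accesses to it can be classified as invisible.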
3 Omega Algebra
An omega algebra is an algebraic structure over the operators (listed in order of increasing
precedence) 0 (nullary), 1 (nullary), + (binary infix), · (binary infix, usually written as simple
juxtaposition), ∘ (binary infix, same precedence as ·), ∗ (unary suffix), and ω (unary suffix), satisfying the
following axioms¹:
(x + y) + z = x + (y + z)
x + y = y + x
x + x = x
0 + x = x
x (y z) = (x y) z
0 x = x 0 = 0
1 x = x 1 = x
x (y + z) = x y + x z
(x + y) z = x z + y z
x ≤ y ⇔ x + y = y

x∗ = 1 + x + x∗ x∗
x y ≤ x ⇒ x y∗ = x    (* ind R)
x y ≤ y ⇒ x∗ y = y    (* ind L)

x ∘ y = xω + x∗ y
xω = x xω
x ≤ y x + z ⇒ x ≤ y ∘ z    (∘ ind)
(Here, as throughout the paper, in displayed formulas and theorems variables w,x,y,z are implicitly
universally quantified over all omega algebra terms.) In parsing formulas, · and ? associate to the
¹The axioms are equivalent to Kozen’s axioms for Kleene algebra [Koz94], plus the three axioms for omega terms.
right; e.g., u v ∘ x ∘ y parses and expands to (u·(vω + v∗·(xω + x∗·y))). In proofs, we use the hint
“(distributivity)” to indicate application of the distributivity laws, and the hint “(hyp)” to indicate
the use of hypotheses. In proof steps that use induction, we use the hint “t1 t2 ≤ t1; (* ind R)”
to indicate use of the first induction axiom (with t1 for x and t2 for y), and dually for the second
induction axiom. If x_i is a finite collection of terms over the range of i, we write (+i : x_i) and
(·i : x_i) for the sum and product, respectively, of these terms.
These axioms are sound and complete for the usual equational theory of omega-regular
expressions; more precisely, completeness holds only for standard terms, where the first arguments
to ·, ω, and ∘ are regular. Thus, we make free use, without proof, of familiar equations from
the theory of (omega-)regular languages (e.g., x∗ x∗ = x∗, (1 + x)∗ = x∗), indicated by the hint
“(regular algebra)”. When an (in)equality appears as a hint without other reference, this hint is
implicit.
y is a complement of x iff x y = 0 = y x and x + y = 1. It is easy to show that complements
(when they exist) are unique and that complementation is an involution; a predicate is an element
of the algebra with a complement. In this paper, p and q (possibly with subscripts) range over
predicates, with complements p̄ and q̄. It is easy to show that the predicates form a Boolean
algebra, with + as disjunction, · as conjunction, 0 as false, 1 as true, complementation as negation,
and ≤ as implication. Equations true in all Boolean algebras (e.g., p q = q p) are freely used in
proofs, indicated by the hint “(Boolean algebra)”; such a hint implicitly carries the claim that all
of the terms in the hint denote predicates.
The omega algebra axioms support several interesting programming models, where (intuitively)
0 is magic², 1 is skip, + is chaotic nondeterministic choice, · is sequential composition, ≤ is
refinement, x∗ is executed by executing x any finite number of times, and xω is executed by executing x
an infinite number of times. The results of this paper are largely motivated by the relational model,
where terms denote binary relations over a state space, 0 is the empty relation, 1 is the identity
relation, · is relational composition, + is union, ∗ is reflexive-transitive closure, ≤ is subset, and xω
relates an input state s to an output state if there is an infinite sequence of states starting with s,
with consecutive states related by x. Thus, xω relates an input state to either all states or none,
and xω = 0 iff x is well-founded. Predicates are identified with the identity relation on the set of
states in their domain; thus, a predicate can be executed, as a no-op, from the states in which it
holds. Define ⊤ = 1ω. One can show that ⊤ is the maximal element under ≤, and in the relational
model, it relates all pairs of states (because it relates an input state s to an output state if there
is an infinite sequence of states starting with s and with consecutive states related by the identity
relation, and there is such a sequence).
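The relational model over a finite state space is easy to make concrete. The sketch below is our code, with ∗ computed as a finite fixpoint; it checks two of the regular-algebra identities used later:

```python
from itertools import product

STATES = range(3)  # a tiny state space for illustration

ONE = frozenset((s, s) for s in STATES)   # 1: the identity relation
ZERO = frozenset()                        # 0: the empty relation
TOP = frozenset(product(STATES, STATES))  # the full relation (top element)

def comp(x, y):
    """The operator `·`: relational composition."""
    return frozenset((a, c) for (a, b) in x for (b2, c) in y if b == b2)

def star(x):
    """The operator `*`: reflexive-transitive closure, as a finite fixpoint."""
    r = ONE
    while True:
        r2 = r | comp(r, x)
        if r2 == r:
            return r
        r = r2

x = frozenset({(0, 1), (1, 2)})
assert comp(star(x), star(x)) == star(x)  # x* x* = x*
assert star(x | ONE) == star(x)           # (1 + x)* = x*
assert comp(x, TOP) <= TOP                # top is maximal under <=
```

Here ≤ is the subset order on frozensets, so the assertions are literal instances of the algebraic laws.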
In addition to equational identities of regular languages, we will use the following standard
theorems (more sophisticated theorems of this type appear in [Coh00]). Algebraic lemmas and
theorems in this paper are presented as numbered equations followed by some vertical space followed
²magic is the program that has no possible executions (and so satisfies every possible specification). Of course, it
cannot be implemented.
by a formal proof.
x y ≤ y z ⇒ x∗ y ≤ y z∗    (1)

x∗ y
  ≤ {1 ≤ z∗}
x∗ y z∗
  = {x y z∗ ≤ y z∗ (below); (* ind L)}
y z∗

x y z∗
  ≤ {x y ≤ y z (hyp)}
y z z∗
  ≤ {z z∗ ≤ z∗}
y z∗
✷
y x ≤ x y ⇒ (x + y)∗ = x∗ y∗    (2)

(x + y)∗
  ≤ {1 ≤ x∗ y∗}
x∗ y∗ (x + y)∗
  = {x∗ y∗ (x + y) ≤ x∗ y∗ (below); (* ind R)}
x∗ y∗
  ≤ {(regular algebra)}
(x + y)∗

Since the first and last terms are equal, the first and third terms are equal.

x∗ y∗ (x + y)
  = {(distributivity)}
x∗ y∗ x + x∗ y∗ y
  ≤ {y x ≤ x y (hyp), so y∗ x ≤ x y∗ (1)}
x∗ x y∗ + x∗ y∗ y
  ≤ {x∗ x ≤ x∗; y∗ y ≤ y∗}
x∗ y∗
✷
y x ≤ (x + 1) y ⇒ (x + y)∗ = x∗ y∗    (3)

(x + y)∗
  = {(regular algebra)}
(x + 1 + y)∗
  = {y (x + 1) ≤ (x + 1) y (below); (2)}
(x + 1)∗ y∗
  = {(x + 1)∗ = x∗}
x∗ y∗

y (x + 1)
  = {(distributivity)}
y x + y 1
  ≤ {y x ≤ (x + 1) y (hyp); y 1 = y = 1 y}
(x + 1) y + 1 y
  = {1 ≤ x + 1}
(x + 1) y
✷
4 A Reduction Theorem
We consider systems composed of a fixed, finite, nonempty set of concurrent processes (each perhaps
internally concurrent and nondeterministic). Variables i and j range over process indices. Each
process i has a visible action v_i and an invisible action u_i³, where the invisible action is constrained
to neither receive information from other processes nor to send information to other processes so as
to create a race condition in the recipient. This constraint is guaranteed only so long as some global
synchronization policy is followed. For example, in a system where processes are synchronized using
locks, either visible or invisible actions of process i might modify variables that are either local to
process i or protected by locks held by process i, release locks, or send asynchronous messages to
other processes; but only visible actions can acquire locks or wait for a condition to hold. Note
that violation of the synchronization discipline (e.g., an action accessing a shared variable without
first obtaining an appropriate lock) might cause a race condition between an invisible action and
the actions of another process, violating the constraint on invisible actions.
To avoid introducing temporal operators, we introduce a Boolean history variable q that records
whether the synchronization discipline has been violated at some point in the execution. Predicate
p_i means that process i cannot perform an invisible action, i.e., that u_i is disabled. Let p be the
conjunction of the p_i’s:

p = (·i : p_i)    (4)

A state satisfying p is called visible; thus, in a visible state, all invisible transitions are disabled.
We now define several actions, formalized in the definitions (5)–(11) below. An M_i action
consists of a visible action of process i followed by a sequence of invisible actions of process i. An
N_i action is an M_i action that is “maximal” (i.e., further u_i actions are disabled) and that finishes
in a state where the synchronization discipline has not been violated. N_i is effectively the transition
relation of thread i in the reduced system. Additional conditions will imply that executing an N
action in a visible state results in a visible state; thus, in the reduced system, context switches
occur only in visible states. A u (respectively v, M, N) action is a u_i (respectively, v_i, M_i, N_i)
action of some process i. Finally, an R action is executable iff (i) the discipline has been violated,
or (ii) such a violation is possible after execution of a single M action. Like xω, R relates each
initial state to either all final states or none.
M_i = v_i u_i∗    (5)
N_i = M_i p_i q̄    (6)
u = (+i : u_i)    (7)
v = (+i : v_i)    (8)
M = (+i : M_i)    (9)
³Note that u_i and v_i can be sums of nondeterministic actions that correspond to individual transitions of process i.
N = (+i : N_i)    (10)
R = (1 + M) q ⊤    (11)
Our reduction theorem says that if the original system can reach a violation of the
synchronization discipline starting from some visible state, then the reduced system can also reach a violation
starting from the same initial state, except that the violation might occur partway through the last
transition of the reduced system (i.e., the last transition might be an M action rather than an N
action). The transition relations of the original and reduced systems are u + v and N, respectively.
Thus, the conclusion of the reduction theorem is p (u + v)∗ q ≤ N∗ R. This says that if a state
s2 satisfying q is reachable from a visible state s1 in the original system—in other words, ⟨s1, s2⟩
is in the relation p (u + v)∗ q—then ⟨s1, s2⟩ is also in N∗ R. Expanding the definition of R and
recognizing that ⊤ is the full relation, this means that for some s3, ⟨s1, s3⟩ is in N∗ (1 + M) q,
i.e., a state satisfying q is reachable from s1 in the reduced system, except that the last transition
might be incomplete (i.e., it might be an M instead of an N).
The hypotheses of our reduction theorem are as follows, formalized in formulas (13)–(21) below.
It is impossible to execute invisible actions of a single process forever without violating the discipline
(13); in other words, the process eventually executes a visible transition, violates the discipline,
or gets stuck. This hypothesis is needed to show that, if a violation occurs in the original system
when multiple threads are in invisible states, all threads except the one causing the violation can
be advanced to visible states (or to an earlier violation) in a finite number of steps; thus, it suffices
to allow a single M transition in the conclusion of the reduction.
An action cannot enable or disable an invisible action of another process; specifically, p_i holds
after u_j or v_j iff it holds before (14),(15). In the absence of a violation, an action commutes to the
right of an invisible action of another process; specifically, if executing u_j or v_j followed by u_i leads
from a state s1 to a state s2, and we try to move the u_j or v_j to the right by executing it after the
u_i, then one of three outcomes must occur: a violation occurs after the u_i, a violation occurs after
the u_j or v_j, or we reach the same state s2 (16),(17).
The next two hypotheses say that p_i holds iff u_i is disabled. The first of them says that p_i
implies u_i is disabled; specifically, no state is reachable by executing u_i from a state where p_i holds
(18). The second of them says that u_i is disabled implies p_i; specifically, in every state s1, either
u_i is enabled (leading to some state s2, which ⊤ relates to s1) or p_i holds (recall that predicates
are modeled as subsets of the identity relation) (19).
Visible and invisible actions of a process cannot be simultaneously enabled; specifically, no
state is reachable by executing v_i from a state satisfying p̄_i (20). Invisible actions cannot hide
violations of the discipline, i.e., if q holds before u_i, then q holds after u_i; specifically, the subset of
u_i containing pairs whose first state satisfies q is a subset of the subset of u_i containing pairs whose
second state satisfies q (21).
Define, for any x,

[x] = x + q ⊤ + x q ⊤    (12)
Intuitively, [x] behaves like x, except that it is allowed to behave arbitrarily if started in a state
where the discipline has been violated, and may behave arbitrarily after performing x if x results
in a state where the discipline has been violated.
(u_i q̄)ω = 0    (13)
i ≠ j ⇒ u_j p_i = p_i u_j    (14)
i ≠ j ⇒ v_j p_i = p_i v_j    (15)
i ≠ j ⇒ u_j u_i ≤ u_i [u_j]    (16)
i ≠ j ⇒ v_j u_i ≤ u_i [v_j]    (17)
p_i u_i = 0    (18)
1 ≤ p_i + u_i ⊤    (19)
p̄_i v_i = 0    (20)
q u_i ≤ u_i q    (21)
Our reduction theorem can be used to check not only the synchronization discipline, but also
the invariance of any other predicate I such that violations of I cannot be hidden by invisible
actions. To see this, note that, except for (21), the conditions above are all monotonic in q. Thus,
if all the conditions above (including (21)) are satisfied for a predicate q, and there is a predicate I
such that Ī u_i ≤ u_i Ī for each i, then all the conditions are still satisfied if q is replaced with q + Ī.
The proof below can be viewed as formalizing the following construction, which starts from an
execution that violates the discipline and produces an execution of the reduced system that also
violates the discipline. First, we try to move invisible u_i actions to the left of u_j and v_j actions,
where i ≠ j, starting from the left (i.e., from the leftmost u_i action that immediately follows a u_j
or v_j action). The u_i action cannot make it all the way to the beginning of the execution (since
p u_i = 0), so it must eventually run into either another u_i or a v_i. Repeating this produces an
execution in which a sequence of M actions leads to a violation of the discipline.
Next, we try to turn all but the last of these M actions into N actions, starting from the next
to last M action. In general, we will have done this for some number of M actions, so we will have
an execution that ends with N∗R. Now try to convert the last Mibefore the N∗R suffix into an
N action. Suppose this Miaction ends with uienabled. uimust then also be enabled later when
the discipline is first violated (because (14) and (15) imply Nj does not affect enabledness of ui,
and (20) implies Niis disabled when uiis enabled), so we add a uiaction just after the violation
and try to push it backward (through the N∗(1 + M)). This may create additional violations of
the discipline, but there will always be an N∗R to the right of the new ui. Eventually, uimakes
it back to the Mi, extending Miwith another ui. By (13), ui’s cannot continue forever without
violating the discipline, so repeating this extension process eventually either gives us a violation
right after Mi(in which case we have produced a new N∗R action, so we can discard everything
after it) or lead to the ui’s being disabled, in which case we have succesfully turned the Miaction
into an N action and again turned the extended execution into an execution that ends with N∗R.
Repeating this for each Miaction, moving from right to left, produces the desired execution of the
11
Page 12
reduced system.
Theorem 1 Let P be a finite set, and let i and j range over P. For all ui and vi, using definitions
(4)–(11), if hypotheses (13)–(21) hold, then p (u + v)∗ q ≤ N∗R.
Proof. The proof below is top-down; in other words, we prove lemmas used in a proof after the
proof itself. Thus, within the proof of formula n, we may use only formulas with labels greater
than n and results proved before now (i.e., formulas with labels less than (22), the label on the
top-level proof below). The top-level proof works as follows: push u's left (lines 1–2) where they
are eliminated by the initial p (line 3), push M's to the left of R's (line 4), condense the R's to a
single R (lines 5–6), and finally turn the M's into N's (lines 7–8).
p (u + v)∗ q ≤ N∗R    (22)

p (u + v)∗ q
≤   {v ≤ M + R (23)}
p (u + M + R)∗ q
≤   {(M + R) u ≤ (1 + u) (M + R) (24); (3)}
p u∗ (M + R)∗ q
≤   {p u∗ ≤ 1 (25)}
(M + R)∗ q
≤   {R M ≤ (M + 1) R (27); (3)}
M∗ R∗ q
=   {R∗ = 1 + R (28)}
M∗ (1 + R) q
≤   {(1 + R) q ≤ R (29)}
M∗ R
≤   {1 ≤ N∗}
M∗ N∗ R
≤   {M N∗R ≤ N∗R (30); (∗ ind L)}
N∗R

✷
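The last step above appeals to left star induction ((∗ ind L): if M Y ≤ Y then M∗ Y ≤ Y). The rule can be illustrated on finite relations; the example below is a hypothetical sketch, not tied to the paper's M, N, R:

```python
# Finite-relation illustration (hypothetical) of left star induction:
# if M Y <= Y then M* Y <= Y.
STATES = {0, 1, 2}
ID = {(s, s) for s in STATES}

def compose(r1, r2):
    """Relational composition r1 ; r2."""
    return {(s, u) for (s, t) in r1 for (t2, u) in r2 if t == t2}

def star(r):
    """Reflexive-transitive closure by iteration to a fixed point."""
    result = set(ID)
    while True:
        nxt = result | compose(result, r)
        if nxt == result:
            return result
        result = nxt

M = {(0, 1), (1, 2)}
Y = {(0, 2), (1, 2), (2, 2)}       # Y absorbs M steps on the left
assert compose(M, Y) <= Y          # premise: M Y <= Y
assert compose(star(M), Y) <= Y    # conclusion: M* Y <= Y
print("star induction verified on this model")
```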
A v is either an M or an R:

v ≤ M + R    (23)

v
=   {(8)}
(+i : vi)
≤   {1 ≤ ui∗}
(+i : vi ui∗)
=   {vi ui∗ = Mi (5)}
(+i : Mi)
=   {(9)}
M
≤   {(regular algebra)}
M + R

✷
A u moves to the left of an M or R (but may disappear in the process):

(M + R) u ≤ (1 + u) (M + R)    (24)

(M + R) u
=   {(distributivity)}
M u + R u
≤   {R u ≤ R (34)}
M u + R
=   {M = (+j : Mj) (9); u = (+i : ui) (7)}
(+j : Mj) (+i : ui) + R
=   {(distributivity)}
(+i,j : Mj ui) + R
≤   {Mj ui ≤ (pi + ui) (Mj + R) (38)}
(+i,j : (pi + ui) (Mj + R)) + R
≤   {pi ≤ 1}
(+i,j : (1 + ui) (Mj + R)) + R
=   {(distributivity)}
(+i : 1 + ui) (+j : Mj + R) + R
=   {(distributivity)}
(1 + (+i : ui)) ((+j : Mj) + R) + R
=   {(+i : ui) = u (7); (+j : Mj) = M (9)}
(1 + u) (M + R) + R
=   {R ≤ (1 + u) (M + R)}
(1 + u) (M + R)

✷
A p swallows up u’s to the right:
p u∗≤ 1 (25)
p u∗
p
1
=
≤
{p u ≤ p (26); (* ind R)}
{(Boolean algebra)
}
✷
A p swallows up a single u to the right:

p u ≤ p    (26)

p u
=   {u = (+i : ui) (7)}
p (+i : ui)
=   {(distributivity)}
(+i : p ui)
≤   {p ≤ pi (4), (Boolean algebra)}
(+i : pi ui)
=   {pi ui = 0 (18)}
(+i : 0)
=   {(distributivity)}
0
≤   {(regular algebra)}
p

✷
An M moves left past an R (possibly disappearing in the process):

R M ≤ (M + 1) R    (27)

R M
≤   {(34)}
R
≤   {1 ≤ M + 1}
(M + 1) R

✷
A sequence of R’s can be reduced to at most one R:
R∗= 1 + R
(28)
13
Page 14
R∗
1 + R + R R∗
1 + R + R
1 + R
R∗
=
=
=
≤
{z∗= 1 + z + z z∗
{R R ≤ R (34); (* ind R)}
{(regular algebra)
{(regular algebra)
}
}
}
Since the first and last terms are equal, the first and fourth terms are equal.
✷
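Since R absorbs anything on its right (34), in particular R R ≤ R, and (28) then collapses R∗ to 1 + R. A quick finite-relation sketch of this collapse (a hypothetical example; this R mimics the paper's "step into a violation-detected sink"):

```python
# Finite check of (28) (hypothetical example): for a relation R with
# R R <= R, the reflexive-transitive closure R* collapses to 1 + R.
STATES = {0, 1, 2}
ID = {(s, s) for s in STATES}

def compose(r1, r2):
    """Relational composition r1 ; r2."""
    return {(s, u) for (s, t) in r1 for (t2, u) in r2 if t == t2}

def star(r):
    """Reflexive-transitive closure by iteration to a fixed point."""
    result = set(ID)
    while True:
        nxt = result | compose(result, r)
        if nxt == result:
            return result
        result = nxt

# R relates every state to the sink state 2, so R R <= R holds:
# composing two R steps is again a single step into 2.
R = {(s, 2) for s in STATES}
assert compose(R, R) <= R
assert star(R) == ID | R       # (28): R* = 1 + R
print("R* = 1 + R holds for this R")
```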
(1 + R) q ≤ R    (29)

(1 + R) q
=   {(distributivity)}
q + R q
≤   {R q ≤ R (34)}
q + R
≤   {q = 1 q ≤ (1 + M) q ≤ (1 + M) q ⊤ = R (11)}
R

✷
An N∗R action swallows up Mi actions to its left:

Mi N∗R ≤ N∗R    (30)

The following proof says that an N∗R action can be used to generate ui actions to its left until it
either produces a discipline violation (q) or until it has produced enough ui's to turn the Mi to its
left into an N:

Mi N∗R
≤   {N∗R ≤ (ui ¬q) N∗R + (pi + ui q) N∗R (32); (ω ind) with x := N∗R, y := ui ¬q, z := (pi + ui q) N∗R}
Mi ((ui ¬q)ω + (ui ¬q)∗ (pi + ui q) N∗R)
=   {(ui ¬q)ω = 0 (13)}
Mi (ui ¬q)∗ (pi + ui q) N∗R
≤   {¬q ≤ 1}
Mi ui∗ (pi + ui q) N∗R
≤   {Mi ui∗ ≤ Mi (31)}
Mi (pi + ui q) N∗R
=   {(distributivity)}
(Mi pi + Mi ui q) N∗R
≤   {Mi ui ≤ Mi ui∗ ≤ Mi (31)}
(Mi pi + Mi q) N∗R
=   {1 = q + ¬q}
(Mi pi (q + ¬q) + Mi q) N∗R
=   {(distributivity)}
(Mi pi q + Mi pi ¬q + Mi q) N∗R
≤   {pi ≤ 1}
(Mi pi ¬q + Mi q) N∗R
≤   {Mi pi ¬q = Ni (6); Ni ≤ N (10)}
(N + Mi q) N∗R
=   {(distributivity)}
N N∗R + Mi q N∗R
≤   {N N∗ ≤ N∗; N∗R ≤ ⊤}
N∗R + Mi q ⊤
≤   {Mi q ⊤ ≤ R (11)}
N∗R + R
≤   {R ≤ N∗R}
N∗R

✷
Mi actions swallow ui actions to their right:

Mi ui∗ ≤ Mi    (31)

Mi ui∗
=   {Mi = vi ui∗ (5)}
vi ui∗ ui∗
=   {ui∗ ui∗ = ui∗}
vi ui∗
=   {vi ui∗ = Mi (5)}
Mi

✷
N∗R can be used to generate (to its left) either a ui or a pi (indicating that ui no longer has
invisible operations to perform):

N∗R ≤ (ui ¬q) N∗R + (pi + ui q) N∗R    (32)

N∗R
=   {R = (1 + M) q ⊤ (11)}
N∗ (1 + M) q ⊤
≤   {1 ≤ pi + ui ⊤ (19)}
N∗ (1 + M) q (pi + ui) ⊤
=   {(distributivity)}
N∗ (1 + M) (q pi + q ui) ⊤
=   {q pi = pi q (Boolean algebra)}
N∗ (1 + M) (pi q + q ui) ⊤
≤   {q ui ≤ ui q (21)}
N∗ (1 + M) (pi q + ui q) ⊤
=   {(distributivity)}
N∗ (1 + M) (pi + ui) q ⊤
=   {(distributivity)}
N∗ ((pi + ui) + M (pi + ui)) q ⊤
≤   {M (pi + ui) ≤ (pi + ui) (M + R) (38)}
N∗ ((pi + ui) + (pi + ui) (M + R)) q ⊤
=   {(distributivity)}
N∗ (pi + ui) (1 + M + R) q ⊤
≤   {(1 + M + R) q ⊤ ≤ R (33)}
N∗ (pi + ui) R
≤   {N (pi + ui) ≤ (pi + ui) (N + R) (35); (1) with x := N, y := pi + ui, z := N + R}
(pi + ui) (N + R)∗ R
≤   {R N ≤ R (34) ≤ (1 + N) R; (3)}
(pi + ui) N∗ R∗ R
≤   {R R ≤ R (34); (∗ ind R)}
(pi + ui) N∗ R
=   {1 = q + ¬q}
(pi + ui (q + ¬q)) N∗R
=   {(distributivity)}
(ui ¬q) N∗R + (pi + ui q) N∗R

✷
(1 + M + R) q ⊤ ≤ R    (33)

(1 + M + R) q ⊤
=   {(distributivity)}
(1 + M) q ⊤ + R q ⊤
≤   {R q ⊤ ≤ R (34)}
(1 + M) q ⊤ + R
≤   {(1 + M) q ⊤ = R (11)}
R

✷
R’s swallow up z’s to the right:
R z ≤ R
(34)
R z
(1 + M) q ? z
(1 + M) q ?
R
=
=
=
{R = (1 + M) q ? (11)}
{? z ≤ ?
{(1 + M) q ? = R (11)}
}
15