An Automata-Theoretic Approach to Linear Temporal Logic

Moshe Y. Vardi*
Rice University
Department of Computer Science
P.O. Box 1892
Houston, TX 77251-1892, U.S.A.
Email: vardi@cs.rice.edu
URL: http://www.cs.rice.edu/~vardi

* Part of this work was done at the IBM Almaden Research Center.
Abstract. The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus, programs and specifications can be viewed as descriptions of languages over some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages. By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata.

Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.
1 Introduction
While program verification has always been a desirable, but never an easy, task, the advent of concurrent programming has made it significantly more necessary and difficult. Indeed,
the conceptual complexity of concurrency increases the likelihood of the program con-
taining errors. To quote from [OL82]: “There is a rather large body of sad experience to
indicate that a concurrent program can withstand very careful scrutiny without revealing
its errors.”
The first step in program verification is to come up with a formal specification of the
program. One of the more widely used specification languages for concurrent programs
is temporal logic [Pnu77, MP92]. Temporal logic comes in two varieties: linear time and
branching time ([EH86, Lam80]); we concentrate here on linear time. A linear temporal
specification describes the computations of the program, so a program satisfies the
specification (is correct) if all its computations satisfy the specification. Of course, a
specification is of interest only if it is satisfiable. An unsatisfiable specification cannot
be satisfied by any program. An often advocated approach to program development is to
avoid the verification step altogether by using the specification to synthesize a program
that is guaranteed to be correct.
Our approach to specification, verification, and synthesis is based on an intimate
connection between linear temporal logic and automata theory, which was discussed
explicitly first in [WVS83] (see also [LPZ85, Pei85, Sis83, SVW87, VW94]). This
connection is based on the fact that a computation is essentially an infinite sequence
of states. In the applications that we consider here, every state is described by a finite
set of atomic propositions, so a computation can be viewed as an infinite word over
the alphabet of truth assignments to the atomic propositions. The basic result in this
area is the fact that temporal logic formulas can be viewed as finite-state acceptors.
More precisely, given any propositional temporal formula, one can construct a finite
automaton on infinite words that accepts precisely the computations satisfying the formula [VW94]. We will describe the applications of this basic result to satisfiability
testing, verification, and synthesis. (For an extensive treatment of the automata-theoretic
approach to verification see [Kur94]).
Unlike classical automata theory, which focused on automata on finite words, the
applications to specification, verification, and synthesis, use automata on infinite words,
since the computations in which we are interested are typically infinite. Before going
into the applications, we give a basic introduction to the theory of automata on infinite
words. To help the readers build their intuition, we review the theory of automata on
finite words and contrast it with the theory of automata on infinite words. For a more
advanced introduction to the theory of automata on infinite objects, the readers are
referred to [Tho90].
2 Automata Theory
We are given a finite nonempty alphabet $\Sigma$. A finite word is an element of $\Sigma^*$, i.e., a finite sequence $a_0, \ldots, a_n$ of symbols from $\Sigma$. An infinite word is an element of $\Sigma^\omega$, i.e., an $\omega$-sequence $a_0, a_1, \ldots$ of symbols from $\Sigma$ (here $\omega$ denotes the first infinite ordinal). Automata on finite words define (finitary) languages, i.e., sets of finite words, while automata on infinite words define infinitary languages, i.e., sets of infinite words.
2.1 Automata on Finite Words - Closure
A (nondeterministic finite) automaton $A$ is a tuple $(\Sigma, S, S^0, \rho, F)$, where $\Sigma$ is a finite nonempty alphabet, $S$ is a finite nonempty set of states, $S^0 \subseteq S$ is a nonempty set of initial states, $F \subseteq S$ is the set of accepting states, and $\rho : S \times \Sigma \to 2^S$ is a transition function. Intuitively, $\rho(s, a)$ is the set of states that $A$ can move into when it is in state $s$ and it reads the symbol $a$. Note that the automaton may be nondeterministic, since it may have many initial states and the transition function may specify many possible transitions for each state and symbol. The automaton $A$ is deterministic if $|S^0| = 1$ and $|\rho(s, a)| \le 1$ for all states $s \in S$ and symbols $a \in \Sigma$. An automaton is essentially an edge-labeled directed graph: the states of the automaton are the nodes, the edges are labeled by symbols in $\Sigma$, a certain set of nodes is designated as initial, and a certain set of nodes is designated as accepting. Thus, $t \in \rho(s, a)$ means that there is an edge from $s$ to $t$ labeled with $a$. When $A$ is deterministic, the transition function $\rho$ can be viewed as a partial mapping from $S \times \Sigma$ to $S$, and can then be extended to a partial mapping from $S \times \Sigma^*$ to $S$ as follows: $\rho(s, \varepsilon) = s$ and $\rho(s, xw) = \rho(\rho(s, x), w)$ for $x \in \Sigma$ and $w \in \Sigma^*$.
A run $r$ of $A$ on a finite word $w = a_0, \ldots, a_{n-1} \in \Sigma^*$ is a sequence $s_0, \ldots, s_n$ of $n+1$ states in $S$ such that $s_0 \in S^0$, and $s_{i+1} \in \rho(s_i, a_i)$ for $0 \le i < n$. Note that a nondeterministic automaton can have many runs on a given input word. In contrast, a deterministic automaton can have at most one run on a given input word. The run $r$ is accepting if $s_n \in F$. One could picture the automaton as having a green light that is switched on whenever the automaton is in an accepting state and switched off whenever the automaton is in a non-accepting state. Thus, the run is accepting if the green light is on at the end of the run. The word $w$ is accepted by $A$ if $A$ has an accepting run on $w$. When $A$ is deterministic, $w \in L(A)$ if and only if $\rho(s_0, w) \in F$, where $S^0 = \{s_0\}$. The (finitary) language of $A$, denoted $L(A)$, is the set of finite words accepted by $A$.
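To make these definitions concrete, here is a minimal sketch in Python (an editorial illustration, not part of the original text); the dictionary-and-set representation and the function name `accepts` are illustrative choices. Membership is decided by tracking the set of states reachable after each input symbol, which is equivalent to asking whether some run is accepting.

```python
# A nondeterministic finite automaton, following the tuple (Sigma, S, S0, rho, F).
# Representation choices (dicts, sets of states) are illustrative, not from the paper.

def accepts(trans, init, acc, word):
    """Return True iff some run of the automaton on `word` ends in an accepting state.

    trans: dict mapping (state, symbol) -> set of successor states (rho)
    init:  set of initial states (S0)
    acc:   set of accepting states (F)
    word:  iterable of symbols
    """
    current = set(init)                      # states reachable so far
    for a in word:
        current = {t for s in current for t in trans.get((s, a), set())}
        if not current:                      # every run is stuck
            return False
    return bool(current & acc)               # some run ends in F

# Example: words over {a, b} that end with the symbol 'a'.
trans = {('s', 'a'): {'s', 't'}, ('s', 'b'): {'s'}}
print(accepts(trans, {'s'}, {'t'}, "abb"))    # False
print(accepts(trans, {'s'}, {'t'}, "abba"))   # True
```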
An important property of automata is their closure under Boolean operations. We
start by considering closure under union and intersection.
Proposition 1. [RS59] Let $A_1, A_2$ be automata. Then there is an automaton $A$ such that $L(A) = L(A_1) \cup L(A_2)$.
Proof: Let $A_1 = (\Sigma, S_1, S^0_1, \rho_1, F_1)$ and $A_2 = (\Sigma, S_2, S^0_2, \rho_2, F_2)$. Without loss of generality, we assume that $S_1$ and $S_2$ are disjoint. Intuitively, the automaton $A$ nondeterministically chooses $A_1$ or $A_2$ and runs it on the input word.

Let $A = (\Sigma, S, S^0, \rho, F)$, where $S = S_1 \cup S_2$, $S^0 = S^0_1 \cup S^0_2$, $F = F_1 \cup F_2$, and $\rho(s, a) = \rho_1(s, a)$ if $s \in S_1$, while $\rho(s, a) = \rho_2(s, a)$ if $s \in S_2$. It is easy to see that $L(A) = L(A_1) \cup L(A_2)$.
We call $A$ in the proof above the union of $A_1$ and $A_2$, denoted $A_1 \cup A_2$.
Proposition 2. [RS59] Let $A_1, A_2$ be automata. Then there is an automaton $A$ such that $L(A) = L(A_1) \cap L(A_2)$.
Proof: Let $A_1 = (\Sigma, S_1, S^0_1, \rho_1, F_1)$ and $A_2 = (\Sigma, S_2, S^0_2, \rho_2, F_2)$. Intuitively, the automaton $A$ runs both $A_1$ and $A_2$ on the input word.

Let $A = (\Sigma, S, S^0, \rho, F)$, where $S = S_1 \times S_2$, $S^0 = S^0_1 \times S^0_2$, $F = F_1 \times F_2$, and $\rho((s, t), a) = \rho_1(s, a) \times \rho_2(t, a)$. It is easy to see that $L(A) = L(A_1) \cap L(A_2)$.
We call $A$ in the proof above the product of $A_1$ and $A_2$, denoted $A_1 \times A_2$.
Note that both the union and the product constructions are effective and polynomial
in the size of the constituent automata.
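The union and product constructions of Propositions 1 and 2 can be sketched directly in the same style (again an illustrative sketch, not from the paper); here an automaton is a 4-tuple (states, initial states, transition dictionary, accepting states), and tagging the states in the union implements the "without loss of generality, disjoint" assumption of Proposition 1.

```python
# Union and product of automata (Sigma, S, S0, rho, F), per Propositions 1 and 2.
# The dictionary representation and the tagging of states are illustrative choices.

def union(a1, a2):
    """Disjoint union: tag states with 1 or 2, then take unions componentwise."""
    def tag(aut, i):
        S, S0, rho, F = aut
        return ({(i, s) for s in S}, {(i, s) for s in S0},
                {((i, s), c): {(i, t) for t in ts} for (s, c), ts in rho.items()},
                {(i, s) for s in F})
    S1, S01, r1, F1 = tag(a1, 1)
    S2, S02, r2, F2 = tag(a2, 2)
    return (S1 | S2, S01 | S02, {**r1, **r2}, F1 | F2)

def product(a1, a2, alphabet):
    """Product: states are pairs, accepting iff both components are accepting."""
    S1, S01, r1, F1 = a1
    S2, S02, r2, F2 = a2
    S = {(s, t) for s in S1 for t in S2}
    rho = {((s, t), c): {(s2, t2) for s2 in r1.get((s, c), set())
                                   for t2 in r2.get((t, c), set())}
           for (s, t) in S for c in alphabet}
    return (S, {(s, t) for s in S01 for t in S02}, rho,
            {(s, t) for s in F1 for t in F2})
```

The membership test from the earlier sketch applies unchanged to the components of the resulting tuples.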
Let us consider now the issue of complementation. Consider first deterministic
automata.
Proposition 3. [RS59] Let $A = (\Sigma, S, S^0, \rho, F)$ be a deterministic automaton, and let $\overline{A} = (\Sigma, S, S^0, \rho, S - F)$. Then $L(\overline{A}) = \Sigma^* - L(A)$.
That is, it is easy to complement deterministic automata; we just have to complement
the acceptance condition. This will not work for nondeterministic automata, since a
nondeterministic automaton can have many runs on a given input word; it is not enough
that some of these runs reject (i.e., not accept) the input word, all runs should reject
the input word. Thus, it seems that to complement a nondeterministic automaton we first have to determinize it.
Proposition 4. [RS59] Let $A$ be a nondeterministic automaton. Then there is a deterministic automaton $A_d$ such that $L(A_d) = L(A)$.
Proof: Let $A = (\Sigma, S, S^0, \rho, F)$. Then $A_d = (\Sigma, 2^S, \{S^0\}, \rho_d, F_d)$. The state set of $A_d$ consists of all sets of states in $S$ and it has a single initial state. The set $F_d = \{T \mid T \cap F \ne \emptyset\}$ is the collection of sets of states that intersect $F$ nontrivially. Finally, $\rho_d(T, a) = \{t \mid t \in \rho(s, a) \text{ for some } s \in T\}$.
Intuitively, $A_d$ collapses all possible runs of $A$ on a given input word into one run over a larger state set. This construction is called the subset construction. By combining Propositions 4 and 3 we can complement a nondeterministic automaton. The construction is effective, but it involves an exponential blow-up, since determinization involves an exponential blow-up (i.e., if $A$ has $n$ states, then $A_d$ has $2^n$ states). As shown in [MF71], this exponential blow-up for determinization and complementation is unavoidable.
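The subset construction itself is easy to sketch (illustrative code, not from the paper); only the subsets reachable from $\{S^0\}$ are generated, which does not change the worst-case exponential bound.

```python
# Subset construction (Proposition 4): determinize an NFA.
# Representation (frozensets as DFA states) is an illustrative choice.

def determinize(alphabet, init, trans, acc):
    """Return (dstates, dinit, dtrans, dacc) of the equivalent deterministic automaton."""
    dinit = frozenset(init)                    # the single initial state {S0}
    dstates, dtrans = {dinit}, {}
    work = [dinit]
    while work:                                # explore only reachable subsets
        T = work.pop()
        for a in alphabet:
            U = frozenset(t for s in T for t in trans.get((s, a), set()))
            dtrans[(T, a)] = U
            if U not in dstates:
                dstates.add(U)
                work.append(U)
    dacc = {T for T in dstates if T & acc}     # F_d: subsets meeting F
    return dstates, dinit, dtrans, dacc

# Example: the "ends with a" automaton from the earlier sketch.
trans = {('s', 'a'): {'s', 't'}, ('s', 'b'): {'s'}}
dstates, dinit, dtrans, dacc = determinize('ab', {'s'}, trans, {'t'})
print(len(dstates), len(dacc))   # 2 1
```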
For example, fix some $n \ge 1$. The set of all finite words over the alphabet $\Sigma = \{a, b\}$ that have an $a$ at the $n$th position from the right is accepted by the automaton $A = (\Sigma, \{0, 1, 2, \ldots, n\}, \{0\}, \rho, \{n\})$, where $\rho(0, a) = \{0, 1\}$, $\rho(0, b) = \{0\}$, and $\rho(i, a) = \rho(i, b) = \{i + 1\}$ for $0 < i < n$. Intuitively, $A$ guesses a position in the input word, checks that it contains $a$, and then checks that it is at distance $n$ from the right end of the input.
Suppose that we have a deterministic automaton $A_d = (\Sigma, S, \{s_0\}, \rho_d, F)$ with fewer than $2^n$ states that accepts this same language. Recall that $\rho_d$ can be viewed as a partial mapping from $S \times \Sigma^*$ to $S$. Since $|S| < 2^n$, there must be two words $u a v_1$ and $u b v_2$ of length $n$ for which $\rho_d(s_0, u a v_1) = \rho_d(s_0, u b v_2)$. But then we would have that $\rho_d(s_0, u a v_1 u) = \rho_d(s_0, u b v_2 u)$; that is, either both $u a v_1 u$ and $u b v_2 u$ are members of $L(A_d)$ or neither are, contradicting the assumption that $L(A_d)$ consists of exactly the words with an $a$ at the $n$th position from the right, since $|a v_1 u| = |b v_2 u| = n$.
2.2 Automata on Infinite Words - Closure
Suppose now that an automaton $A = (\Sigma, S, S^0, \rho, F)$ is given as input an infinite word $w = a_0, a_1, \ldots$ over $\Sigma$. A run $r$ of $A$ on $w$ is a sequence $s_0, s_1, \ldots$, where $s_0 \in S^0$ and $s_{i+1} \in \rho(s_i, a_i)$, for all $i \ge 0$. Since the run is infinite, we cannot define acceptance by the type of the final state of the run. Instead we have to consider the limit behavior of the run. We define $\lim(r)$ to be the set $\{s \mid s = s_i$ for infinitely many $i$'s$\}$, i.e., the set of states that occur in $r$ infinitely often. Since $S$ is finite, $\lim(r)$ is necessarily nonempty. The run $r$ is accepting if there is some accepting state that repeats in $r$ infinitely often, i.e., $\lim(r) \cap F \ne \emptyset$. If we picture the automaton as having a green light that is switched on precisely when the automaton is in an accepting state, then the run is accepting if the green light is switched on infinitely many times. The infinite word $w$ is accepted by $A$ if there is an accepting run of $A$ on $w$. The infinitary language of $A$, denoted $L_\omega(A)$, is the set of infinite words accepted by $A$.

Thus, $A$ can be viewed both as an automaton on finite words and as an automaton on infinite words. When viewed as an automaton on infinite words it is called a Büchi automaton [Büc62].
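Although an infinite run cannot be executed, a run of the lasso form prefix·cycle$^\omega$ has $\lim(r)$ equal to the set of states on the cycle, so the Büchi condition can be checked directly for such runs. The following fragment (an illustrative sketch, not from the paper) does exactly that.

```python
# Buechi acceptance for an ultimately periodic run r = prefix . cycle^omega.
# The lasso encoding of an infinite run is an illustrative choice.

def buchi_accepting(prefix, cycle, acc):
    """lim(r) is exactly the set of states on the (nonempty) cycle; the prefix
    contributes nothing to the limit, so the run is accepting iff the cycle
    visits an accepting state."""
    assert cycle, "the cycle part must be nonempty"
    return bool(set(cycle) & set(acc))

# Run (s t)^omega with accepting set {s}: accepting.
print(buchi_accepting([], ['s', 't'], {'s'}))   # True
# Run s t^omega with accepting set {s}: s occurs only once, so not accepting.
print(buchi_accepting(['s'], ['t'], {'s'}))     # False
```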
Do automata on infinite words have closure properties similar to those of automata
on finite words? In most cases the answer is positive, but the proofs may be more
involved. We start by considering closure under union. Here the union construction does
the right thing.
Proposition 5. [Cho74] Let $A_1, A_2$ be Büchi automata. Then $L_\omega(A_1 \cup A_2) = L_\omega(A_1) \cup L_\omega(A_2)$.
One might be tempted to think that similarly we have that $L_\omega(A_1 \times A_2) = L_\omega(A_1) \cap L_\omega(A_2)$, but this is not the case. The accepting set of $A_1 \times A_2$ is the product of the accepting sets of $A_1$ and $A_2$. Thus, $A_1 \times A_2$ accepts an infinite word $w$ if there are accepting runs $r_1$ and $r_2$ of $A_1$ and $A_2$, respectively, on $w$, where both runs go infinitely often and simultaneously through accepting states. This requirement is too strong. As a result, $L_\omega(A_1 \times A_2)$ could be a strict subset of $L_\omega(A_1) \cap L_\omega(A_2)$. For example, define the two Büchi automata $A_1 = (\{a\}, \{s, t\}, \{s\}, \rho, \{s\})$ and $A_2 = (\{a\}, \{s, t\}, \{s\}, \rho, \{t\})$ with $\rho(s, a) = \{t\}$ and $\rho(t, a) = \{s\}$. Clearly we have that $L_\omega(A_1) = L_\omega(A_2) = \{a^\omega\}$, but $L_\omega(A_1 \times A_2) = \emptyset$.
Nevertheless, closure under intersection does hold.
Proposition 6. [Cho74] Let $A_1, A_2$ be Büchi automata. Then there is a Büchi automaton $A$ such that $L_\omega(A) = L_\omega(A_1) \cap L_\omega(A_2)$.
Proof: Let $A_1 = (\Sigma, S_1, S^0_1, \rho_1, F_1)$ and $A_2 = (\Sigma, S_2, S^0_2, \rho_2, F_2)$. Let $A = (\Sigma, S, S^0, \rho, F)$, where $S = S_1 \times S_2 \times \{1, 2\}$, $S^0 = S^0_1 \times S^0_2 \times \{1\}$, $F = F_1 \times S_2 \times \{1\}$, and $(s', t', j) \in \rho((s, t, i), a)$ if $s' \in \rho_1(s, a)$, $t' \in \rho_2(t, a)$, and $i = j$, unless $i = 1$ and $s \in F_1$, in which case $j = 2$, or $i = 2$ and $t \in F_2$, in which case $j = 1$.
Intuitively, the automaton $A$ runs both $A_1$ and $A_2$ on the input word. Thus, the automaton can be viewed as having two "tracks", one for each of $A_1$ and $A_2$. In addition to remembering the state of each track, $A$ also has a pointer that points to one of the tracks (1 or 2). Whenever a track goes through an accepting state, the pointer moves to the other track. The acceptance condition guarantees that both tracks visit accepting states infinitely often, since a run accepts iff it goes infinitely often through $F_1 \times S_2 \times \{1\}$. This means that the first track visits infinitely often an accepting state with the pointer pointing to the first track. Whenever, however, the first track visits an accepting state with the pointer pointing to the first track, the pointer is changed to point to the second track. The pointer returns to point to the first track only if the second track visits an accepting state. Thus, the second track must also visit an accepting state infinitely often.
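The two-track construction in the proof of Proposition 6 can be sketched as follows (illustrative code, not from the paper); the pointer component switches tracks exactly when the track it points to passes through an accepting state.

```python
# Intersection of two Buechi automata (Proposition 6): product with a 1/2 pointer.
# Automata are given as (S, S0, rho, F) with rho a dict (state, symbol) -> set;
# this dictionary representation is an illustrative choice.

def buchi_intersection(a1, a2, alphabet):
    S1, S01, r1, F1 = a1
    S2, S02, r2, F2 = a2
    S = {(s, t, i) for s in S1 for t in S2 for i in (1, 2)}
    S0 = {(s, t, 1) for s in S01 for t in S02}
    F = {(s, t, 1) for s in F1 for t in S2}
    rho = {}
    for (s, t, i) in S:
        for a in alphabet:
            # the pointer moves to the other track when the pointed track is accepting
            j = i
            if i == 1 and s in F1:
                j = 2
            elif i == 2 and t in F2:
                j = 1
            rho[((s, t, i), a)] = {(s2, t2, j)
                                   for s2 in r1.get((s, a), set())
                                   for t2 in r2.get((t, a), set())}
    return S, S0, rho, F

# A1 and A2 over {a}: both accept a^omega, with different accepting states.
r = {('s', 'a'): {'t'}, ('t', 'a'): {'s'}}
A1 = ({'s', 't'}, {'s'}, r, {'s'})
A2 = ({'s', 't'}, {'s'}, r, {'t'})
prod = buchi_intersection(A1, A2, 'a')
print(len(prod[0]), len(prod[3]))   # 8 states, 2 accepting states
```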
Thus, Büchi automata are closed under both union and intersection, though the construction for intersection is somewhat more involved than a simple product. The situation is considerably more involved with respect to closure under complementation. First, as we shall shortly see, Büchi automata are not closed under determinization, i.e., nondeterministic Büchi automata are more expressive than deterministic Büchi automata. Second, it is not even obvious how to complement deterministic Büchi automata. Consider the deterministic Büchi automaton $A = (\Sigma, S, S^0, \rho, F)$. One may think that it suffices to complement the acceptance condition, i.e., to replace $F$ by $S - F$ and define $\overline{A} = (\Sigma, S, S^0, \rho, S - F)$. Not going infinitely often through $F$, however, is not the same as going infinitely often through $S - F$. A run might go through both $F$ and $S - F$ infinitely often. Thus, $L_\omega(\overline{A})$ may be a strict superset of $\Sigma^\omega - L_\omega(A)$. For example, consider the Büchi automaton $A = (\{a\}, \{s, t\}, \{s\}, \rho, \{s\})$ with $\rho(s, a) = \{t\}$ and $\rho(t, a) = \{s\}$. We have that $L_\omega(A) = L_\omega(\overline{A}) = \{a^\omega\}$.
Nevertheless, Büchi automata (deterministic as well as nondeterministic) are closed under complementation.
Proposition 7. [Büc62] Let $A$ be a Büchi automaton over an alphabet $\Sigma$. Then there is a (possibly nondeterministic) Büchi automaton $\overline{A}$ such that $L_\omega(\overline{A}) = \Sigma^\omega - L_\omega(A)$.
The construction in [Büc62] is doubly exponential. This is improved in [SVW87] to a singly exponential construction with a quadratic exponent (i.e., if $A$ has $n$ states then $\overline{A}$ has $c^{n^2}$ states, for some constant $c > 1$). In contrast, the exponent in the construction of Proposition 4 is linear. We will come back later to the complexity of complementation.
Let us return to the issue of determinization. We now show that nondeterministic Büchi automata are more expressive than deterministic Büchi automata. Consider the infinitary language $\Lambda = (0 + 1)^* 1^\omega$, i.e., $\Lambda$ consists of all infinite words in which 0 occurs only finitely many times. It is easy to see that $\Lambda$ can be defined by a nondeterministic Büchi automaton. Let $A_0 = (\{0, 1\}, \{s, t\}, \{s\}, \rho, \{t\})$, where $\rho(s, 0) = \rho(s, 1) = \{s, t\}$, $\rho(t, 1) = \{t\}$ and $\rho(t, 0) = \emptyset$. That is, the states are $s$ and $t$ with $s$ the initial state and $t$ the accepting state. As long as it is in the state $s$, the automaton $A_0$ can read both inputs 0 and 1. At some point, however, $A_0$ makes a nondeterministic transition to the state $t$, and from that point on it can read only the input 1. It is easy to see that $\Lambda = L_\omega(A_0)$. In contrast, $\Lambda$ cannot be defined by any deterministic Büchi automaton.
Proposition 8. Let $\Lambda = (0 + 1)^* 1^\omega$. Then there is no deterministic Büchi automaton $A$ such that $\Lambda = L_\omega(A)$.
Proof: Assume by way of contradiction that $\Lambda = L_\omega(A)$, where $A = (\Sigma, S, \{s_0\}, \rho, F)$ for $\Sigma = \{0, 1\}$, and $A$ is deterministic. Recall that $\rho$ can be viewed as a partial mapping from $S \times \Sigma^*$ to $S$.

Consider the infinite word $w_0 = 1^\omega$. Clearly, $w_0$ is accepted by $A$, so $A$ has an accepting run on $w_0$. Thus, $w_0$ has a finite prefix $u_0$ such that $\rho(s_0, u_0) \in F$. Consider now the infinite word $w_1 = u_0 0 1^\omega$. Clearly, $w_1$ is also accepted by $A$, so $A$ has an accepting run on $w_1$. Thus, $w_1$ has a finite prefix $u_0 0 u_1$ such that $\rho(s_0, u_0 0 u_1) \in F$. In a similar fashion we can continue to find finite words $u_i$ such that $\rho(s_0, u_0 0 u_1 0 \ldots 0 u_i) \in F$. Since $S$ is finite, there are $i, j$, where $0 \le i < j$, such that $\rho(s_0, u_0 0 u_1 0 \ldots 0 u_i) = \rho(s_0, u_0 0 u_1 0 \ldots 0 u_i 0 \ldots 0 u_j)$. It follows that $A$ has an accepting run on $u_0 0 u_1 0 \ldots 0 u_i (0 \ldots 0 u_j)^\omega$. But the latter word has infinitely many occurrences of 0, so it is not in $\Lambda$.
Note that the complementary language $\Sigma^\omega - \Lambda = ((0 + 1)^* 0)^\omega$ (the set of infinite words in which 0 occurs infinitely often) is acceptable by the deterministic Büchi automaton $A = (\{0, 1\}, \{s, t\}, \{s\}, \rho, \{s\})$, where $\rho(s, 0) = \rho(t, 0) = \{s\}$ and $\rho(s, 1) = \rho(t, 1) = \{t\}$. That is, the automaton starts at the state $s$ and then it simply remembers the last symbol it read ($s$ corresponds to 0 and $t$ corresponds to 1). Thus, the use of nondeterminism in Proposition 7 is essential.
To understand why the subset construction does not work for Büchi automata, consider the following two automata over a singleton alphabet: $A_1 = (\{a\}, \{s, t\}, \{s\}, \rho_1, \{t\})$ and $A_2 = (\{a\}, \{s, t\}, \{s\}, \rho_2, \{t\})$, where $\rho_1(s, a) = \{s, t\}$, $\rho_1(t, a) = \emptyset$, $\rho_2(s, a) = \{s, t\}$, and $\rho_2(t, a) = \{s\}$. It is easy to see that $A_1$ does not accept any infinite word, since no infinite run can visit the state $t$. In contrast, $A_2$ accepts the infinite word $a^\omega$, since the run $(st)^\omega$ is accepting. If we apply the subset construction to both automata, then in both cases the initial state is $\{s\}$, $\rho_d(\{s\}, a) = \{s, t\}$, and $\rho_d(\{s, t\}, a) = \{s, t\}$. Thus, the subset construction cannot distinguish between $A_1$ and $A_2$.
To be able to determinize automata on infinite words, we have to consider a more general acceptance condition. Let $S$ be a finite nonempty set of states. A Rabin condition is a subset $G$ of $2^S \times 2^S$, i.e., it is a collection of pairs of sets of states, written $[(L_1, U_1), \ldots, (L_k, U_k)]$ (we drop the external brackets when the condition consists of a single pair). A Rabin automaton $A$ is an automaton on infinite words where the acceptance condition is specified by a Rabin condition, i.e., it is of the form $(\Sigma, S, S^0, \rho, G)$. A run $r$ of $A$ is accepting if for some $i$ we have that $\lim(r) \cap L_i \ne \emptyset$ and $\lim(r) \cap U_i = \emptyset$, that is, there is a pair in $G$ where the left set is visited infinitely often by $r$ while the right set is visited only finitely often by $r$.
Rabin automata are not more expressive than Büchi automata.
Proposition 9. [Cho74] Let $A$ be a Rabin automaton. Then there is a Büchi automaton $A_b$ such that $L_\omega(A) = L_\omega(A_b)$.
Proof: Let $A = (\Sigma, S, S^0, \rho, G)$, where $G = [(L_1, U_1), \ldots, (L_k, U_k)]$. It is easy to see that $L_\omega(A) = \bigcup_{i=1}^{k} L_\omega(A_i)$, where $A_i = (\Sigma, S, S^0, \rho, (L_i, U_i))$. Since Büchi automata are closed under union, by Proposition 5, it suffices to prove the claim for Rabin conditions that consist of a single pair, say $(L, U)$.

The idea of the construction is to take two copies of $A$, say $A_1$ and $A_2$. The Büchi automaton $A_b$ starts in $A_1$ and stays there "as long as it wants". At some point it nondeterministically makes a transition into $A_2$ and it stays there avoiding $U$ and visiting $L$ infinitely often. Formally, $A_b = (\Sigma, S_b, S^0_b, \rho_b, L)$, where $S_b = S \times \{0\} \cup (S - U)$, $S^0_b = S^0 \times \{0\}$, $\rho_b(s, a) = \rho(s, a) - U$ for $s \in S - U$, and $\rho_b(\langle s, 0\rangle, a) = \rho(s, a) \times \{0\} \cup (\rho(s, a) - U)$.
Note that the construction in the proposition above is effective and polynomial in the
size of the given automaton.
If we restrict attention, however, to deterministic automata, then Rabin automata are more expressive than Büchi automata. Recall the infinitary language $\Lambda = (0 + 1)^* 1^\omega$. We showed earlier that it is not definable by a deterministic Büchi automaton. It is easily definable, however, by a Rabin automaton. Let $A = (\{0, 1\}, \{s, t\}, \{s\}, \rho, (\{t\}, \{s\}))$, where $\rho(s, 0) = \rho(t, 0) = \{s\}$ and $\rho(s, 1) = \rho(t, 1) = \{t\}$. That is, the automaton starts at the state $s$ and then it simply remembers the last symbol it read ($s$ corresponds to 0 and $t$ corresponds to 1). It is easy to see that $\Lambda = L_\omega(A)$.
The additional expressive power of Rabin automata is sufficient to provide closure
under determinization.
Proposition 10. [McN66] Let $A$ be a Büchi automaton. There is a deterministic Rabin automaton $A_d$ such that $L_\omega(A_d) = L_\omega(A)$.
Proposition 10 was first proven in [McN66], where a doubly exponential construction was provided. This was improved in [Saf88], where a singly exponential construction with an almost linear exponent was provided (if $A$ has $n$ states, then $A_d$ has $2^{O(n \log n)}$ states and $O(n)$ pairs). Furthermore, it was shown in [Saf88, EJ89] how the determinization construction can be modified to yield a co-determinization construction, i.e., a construction of a deterministic Rabin automaton $A'_d$ such that $L_\omega(A'_d) = \Sigma^\omega - L_\omega(A)$, where $\Sigma$ is the underlying alphabet. The co-determinization construction is also singly exponential with an almost linear exponent (again, if $A$ has $n$ states, then $A'_d$ has $2^{O(n \log n)}$ states and $O(n)$ pairs). Thus, combining the co-determinization construction with the polynomial translation of Rabin automata to Büchi automata (Proposition 9), we get a complementation construction whose complexity is singly exponential with an almost linear exponent. This improves the previously mentioned bound on complementation (singly exponential with a quadratic exponent) and is essentially optimal [Mic88]. In contrast, complementation for automata on finite words involves an exponential blow-up with a linear exponent (Section 2.1). Thus, complementation for automata on infinite words is provably harder than complementation for automata on finite words. Both constructions are exponential, but in the finite case the exponent is linear, while in the infinite case the exponent is nonlinear.
2.3 Automata on Finite Words - Algorithms
An automaton is “interesting” if it defines an “interesting” language, i.e., a language that is neither empty nor contains all possible words. An automaton $A$ is nonempty if $L(A) \ne \emptyset$; it is nonuniversal if $L(A) \ne \Sigma^*$. One of the most fundamental algorithmic issues in automata theory is testing whether a given automaton is “interesting”, i.e., nonempty and nonuniversal. The nonemptiness problem for automata is to decide, given an automaton $A$, whether $A$ is nonempty. The nonuniversality problem for automata is to decide, given an automaton $A$, whether $A$ is nonuniversal. It turns out that testing nonemptiness is easy, while testing nonuniversality is hard.
Proposition 11. [RS59, Jon75]
1. The nonemptiness problem for automata is decidable in linear time.
2. The nonemptiness problem for automata is NLOGSPACE-complete.
Proof: Let $A = (\Sigma, S, S^0, \rho, F)$ be the given automaton. Let $s, t$ be states of $S$. We say that $t$ is directly connected to $s$ if there is a symbol $a \in \Sigma$ such that $t \in \rho(s, a)$. We say that $t$ is connected to $s$ if there is a sequence $s_1, \ldots, s_m$, $m \ge 1$, of states such that $s_1 = s$, $s_m = t$, and $s_{i+1}$ is directly connected to $s_i$ for $1 \le i < m$. Essentially, $t$ is connected to $s$ if there is a path in $A$ from $s$ to $t$, where $A$ is viewed as an edge-labeled directed graph. Note that the edge labels are ignored in this definition. It is easy to see that $L(A)$ is nonempty iff there are states $s \in S^0$ and $t \in F$ such that $t$ is connected to $s$. Thus, automata nonemptiness is equivalent to graph reachability. The claims now follow from the following observations:

1. A breadth-first-search algorithm can construct in linear time the set of all states connected to a state in $S^0$ [CLR90]. $A$ is nonempty iff this set intersects $F$ nontrivially. (A sketch of this test appears after the proof.)

2. Graph reachability can be tested in nondeterministic logarithmic space. The algorithm simply guesses a state $s_0 \in S^0$, then guesses a state $s_1$ that is directly connected to $s_0$, then guesses a state $s_2$ that is directly connected to $s_1$, etc., until it reaches a state $t \in F$. (Recall that a nondeterministic algorithm accepts if there is a sequence of guesses that leads to acceptance. We do not care here about sequences of guesses that do not lead to acceptance [GJ79].) At each step the algorithm needs to remember only the current state and the next state; thus, if there are $n$ states the algorithm needs to keep in memory $O(\log n)$ bits, since $\log n$ bits suffice to describe one state. On the other hand, graph reachability is also NLOGSPACE-hard [Jon75].
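The breadth-first test of item 1 can be sketched as follows (illustrative code, not from the paper); edge labels are ignored, exactly as in the definition of connectedness.

```python
# Nonemptiness test (Proposition 11, item 1): BFS reachability from the initial states.
# The graph encoding (adjacency built from the transition dict) is an illustrative choice.

from collections import deque

def nonempty(init, trans, acc):
    """True iff some accepting state is connected to some initial state."""
    succ = {}                                  # successors of each state, labels dropped
    for (s, a), ts in trans.items():
        succ.setdefault(s, set()).update(ts)
    seen, queue = set(init), deque(init)
    while queue:
        s = queue.popleft()
        if s in acc:
            return True
        for t in succ.get(s, set()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

trans = {('s', 'a'): {'t'}, ('t', 'b'): {'t'}}
print(nonempty({'s'}, trans, {'t'}))   # True
print(nonempty({'s'}, trans, {'u'}))   # False
```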
Proposition 12. [MS72]
1. The nonuniversality problem for automata is decidable in exponential time.
2. The nonuniversality problem for automata is PSPACE-complete.
Proof: Note that $L(A) \ne \Sigma^*$ iff $\Sigma^* - L(A) \ne \emptyset$ iff $L(\overline{A}) \ne \emptyset$, where $\overline{A}$ is the complementary automaton of $A$ (see Section 2.1). Thus, to test $A$ for nonuniversality, it suffices to test $\overline{A}$ for nonemptiness. Recall that $\overline{A}$ is exponentially bigger than $A$. Since nonemptiness can be tested in linear time, it follows that nonuniversality can be tested in exponential time. Also, since nonemptiness can be tested in nondeterministic logarithmic space, nonuniversality can be tested in polynomial space.

The latter argument requires some care. We cannot simply construct $\overline{A}$ and then test it for nonemptiness, since $\overline{A}$ is exponentially big. Instead, we construct $\overline{A}$ "on-the-fly"; whenever the nonemptiness algorithm wants to move from a state $t_1$ of $\overline{A}$ to a state $t_2$, the algorithm guesses $t_2$ and checks that it is directly connected to $t_1$. Once this has been verified, the algorithm can discard $t_1$. Thus, at each step the algorithm needs to keep in memory at most two states of $\overline{A}$ and there is no need to generate all of $\overline{A}$ at any single step of the algorithm.
This yields a nondeterministic polynomial space algorithm. To eliminate nondeterminism, we appeal to a well-known theorem of Savitch [Sav70] which states that $NSPACE(f(n)) \subseteq DSPACE(f(n)^2)$, for $f(n) \ge \log n$; that is, any nondeterministic algorithm that uses at least logarithmic space can be simulated by a deterministic algorithm that uses at most a quadratically larger amount of space. In particular, any nondeterministic polynomial-space algorithm can be simulated by a deterministic polynomial-space algorithm.
To prove PSPACE-hardness, it can be shown that any PSPACE-hard problem can be reduced to the nonuniversality problem. That is, there is a logarithmic-space algorithm that given a polynomial-space-bounded Turing machine $M$ and a word $w$ outputs an automaton $A_{M,w}$ such that $M$ accepts $w$ iff $A_{M,w}$ is nonuniversal [MS72, HU79].
2.4 Automata on Infinite Words - Algorithms
The results for Büchi automata are analogous to the results in Section 2.3.
Proposition 13.
1. [EL85b, EL85a] The nonemptiness problem for Büchi automata is decidable in linear time.
2. [VW94] The nonemptiness problem for Büchi automata is NLOGSPACE-complete.
Proof: Let $A = (\Sigma, S, S^0, \rho, F)$ be the given automaton. We claim that $L_\omega(A)$ is nonempty iff there are states $s_0 \in S^0$ and $t \in F$ such that $t$ is connected to $s_0$ and $t$ is connected to itself. Suppose first that $L_\omega(A)$ is nonempty. Then there is an accepting run $r = s_0, s_1, \ldots$ of $A$ on some input word. Clearly, $s_{i+1}$ is directly connected to $s_i$ for all $i \ge 0$. Thus, $s_j$ is connected to $s_i$ whenever $i < j$. Since $r$ is accepting, some $t \in F$ occurs in $r$ infinitely often; in particular, there are $i, j$, where $0 < i < j$, such that $t = s_i = s_j$. Thus, $t$ is connected to $s_0 \in S^0$ and $t$ is also connected to itself. Conversely, suppose that there are states $s_0 \in S^0$ and $t \in F$ such that $t$ is connected to $s_0$ and $t$ is connected to itself. Since $t$ is connected to $s_0$, there are a sequence of states $s_1, \ldots, s_k$ and a sequence of symbols $a_1, \ldots, a_k$ such that $s_k = t$ and $s_i \in \rho(s_{i-1}, a_i)$ for $1 \le i \le k$. Similarly, since $t$ is connected to itself, there are a sequence of states $t_0, t_1, \ldots, t_l$ and a sequence of symbols $b_1, \ldots, b_l$ such that $t_0 = t_l = t$ and $t_i \in \rho(t_{i-1}, b_i)$ for $1 \le i \le l$. Thus, $(s_0, s_1, \ldots, s_{k-1})(t_0, t_1, \ldots, t_{l-1})^\omega$ is an accepting run of $A$ on $(a_1, \ldots, a_k)(b_1, \ldots, b_l)^\omega$, so $A$ is nonempty.
Thus, Büchi automata nonemptiness is also reducible to graph reachability; a sketch of such a test appears after the proof.

1. A depth-first-search algorithm can construct a decomposition of the graph into strongly connected components [CLR90]. $A$ is nonempty iff from a component that intersects $S^0$ nontrivially it is possible to reach a nontrivial component that intersects $F$ nontrivially. (A strongly connected component is nontrivial if it contains an edge, which means, since it is strongly connected, that it contains a cycle.)

2. The algorithm simply guesses a state $s_0 \in S^0$, then guesses a state $s_1$ that is directly connected to $s_0$, then guesses a state $s_2$ that is directly connected to $s_1$, etc., until it reaches a state $t \in F$. At that point the algorithm remembers $t$ and it continues to move nondeterministically from a state $s$ to a state $s'$ that is directly connected to $s$ until it reaches $t$ again. Clearly, the algorithm needs only a logarithmic memory, since it needs to remember at most a description of three states at each step.

NLOGSPACE-hardness follows from NLOGSPACE-hardness of nonemptiness for automata on finite words.
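The characterization established in the proof — some accepting state connected to an initial state and to itself — can be tested with two reachability searches, as in the following sketch (illustrative, not from the paper). A linear-time implementation would use the strongly-connected-component decomposition of item 1 instead.

```python
# Buechi nonemptiness (Proposition 13): some t in F reachable from S0 and from itself.
# A simple double-reachability sketch; quadratic in the worst case, unlike the SCC method.

def reachable(sources, succ):
    seen, stack = set(sources), list(sources)
    while stack:
        s = stack.pop()
        for t in succ.get(s, set()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def buchi_nonempty(init, trans, acc):
    succ = {}
    for (s, a), ts in trans.items():
        succ.setdefault(s, set()).update(ts)
    from_init = reachable(init, succ)
    for t in acc & from_init:
        # t is connected to itself iff it is reachable from its own successors
        if t in reachable(succ.get(t, set()), succ):
            return True
    return False

# "0 occurs infinitely often": accepting state 's' lies on a cycle reachable from S0.
trans = {('s', '0'): {'s'}, ('s', '1'): {'t'}, ('t', '0'): {'s'}, ('t', '1'): {'t'}}
print(buchi_nonempty({'s'}, trans, {'s'}))   # True
print(buchi_nonempty({'s'}, trans, {'x'}))   # False
```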
Proposition 14. [SVW87]
1. The nonuniversality problem for Büchi automata is decidable in exponential time.
2. The nonuniversality problem for Büchi automata is PSPACE-complete.
Proof: Again $L_\omega(A) \ne \Sigma^\omega$ iff $\Sigma^\omega - L_\omega(A) \ne \emptyset$ iff $L_\omega(\overline{A}) \ne \emptyset$, where $\overline{A}$ is the complementary automaton of $A$ (see Section 2.2). Thus, to test $A$ for nonuniversality, it suffices to test $\overline{A}$ for nonemptiness. Since $\overline{A}$ is exponentially bigger than $A$ and nonemptiness can be tested in linear time, it follows that nonuniversality can be tested in exponential time. Also, since nonemptiness can be tested in nondeterministic logarithmic space, nonuniversality can be tested in polynomial space. Again, the polynomial-space algorithm constructs $\overline{A}$ "on-the-fly".

PSPACE-hardness follows easily from the PSPACE-hardness of the universality problem for automata on finite words [Wol82].
2.5 Automata on Finite Words - Alternation
Nondeterminism gives a computing device the power of existential choice. Its dual gives
a computing device the power of universal choice. (Compare this to the complexity
classes NP and co-NP [GJ79]). It is therefore natural to consider computing devices that
have the power of both existential choice and universal choice. Such devices are called
alternating. Alternation was studied in [CKS81] in the context of Turing machines
and in [BL80, CKS81] for finite automata. The alternation formalisms in [BL80] and
[CKS81] are different, though equivalent. We follow here the formalism of [BL80].
For a given set $X$, let $\mathcal{B}^+(X)$ be the set of positive Boolean formulas over $X$ (i.e., Boolean formulas built from elements in $X$ using $\wedge$ and $\vee$), where we also allow the formulas true and false. Let $Y \subseteq X$. We say that $Y$ satisfies a formula $\theta \in \mathcal{B}^+(X)$ if the truth assignment that assigns true to the members of $Y$ and assigns false to the members of $X - Y$ satisfies $\theta$. For example, the sets $\{s_1, s_3\}$ and $\{s_1, s_4\}$ both satisfy the formula $(s_1 \vee s_2) \wedge (s_3 \vee s_4)$, while the set $\{s_1, s_2\}$ does not satisfy this formula.
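Satisfaction of a formula in $\mathcal{B}^+(X)$ by a set $Y$ is a simple recursive evaluation; the following sketch (illustrative, not from the paper) encodes such formulas as nested tuples.

```python
# Satisfaction of positive Boolean formulas in B+(X) by a set Y. Illustrative encoding:
# a formula is True, False, an element of X, or a tuple ('and'|'or', left, right).

def satisfies(Y, f):
    if f is True or f is False:
        return f
    if isinstance(f, tuple):
        op, left, right = f
        if op == 'and':
            return satisfies(Y, left) and satisfies(Y, right)
        if op == 'or':
            return satisfies(Y, left) or satisfies(Y, right)
        raise ValueError(op)
    return f in Y                      # f is an element of X

# (s1 or s2) and (s3 or s4), as in the text
theta = ('and', ('or', 's1', 's2'), ('or', 's3', 's4'))
print(satisfies({'s1', 's3'}, theta))   # True
print(satisfies({'s1', 's4'}, theta))   # True
print(satisfies({'s1', 's2'}, theta))   # False
```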
Consider a nondeterministic automaton $A = (\Sigma, S, S^0, \rho, F)$. The transition function $\rho$ maps a state $s \in S$ and an input symbol $a \in \Sigma$ to a set of states. Each element in this set is a possible nondeterministic choice for the automaton's next state. We can represent $\rho$ using $\mathcal{B}^+(S)$; for example, $\rho(s, a) = \{s_1, s_2, s_3\}$ can be written as $\rho(s, a) = s_1 \vee s_2 \vee s_3$. In alternating automata, $\rho(s, a)$ can be an arbitrary formula from $\mathcal{B}^+(S)$. We can have, for instance, a transition

$\rho(s, a) = (s_1 \wedge s_2) \vee (s_3 \wedge s_4),$

meaning that the automaton accepts the word $aw$, where $a$ is a symbol and $w$ is a word, when it is in the state $s$, if it accepts the word $w$ from both $s_1$ and $s_2$ or from both $s_3$ and $s_4$. Thus, such a transition combines the features of existential choice (the disjunction in the formula) and universal choice (the conjunctions in the formula).

Formally, an alternating automaton is a tuple $A = (\Sigma, S, s_0, \rho, F)$, where $\Sigma$ is a finite nonempty alphabet, $S$ is a finite nonempty set of states, $s_0 \in S$ is the initial state (notice that we have a unique initial state), $F$ is a set of accepting states, and $\rho : S \times \Sigma \to \mathcal{B}^+(S)$ is a transition function.
Because of the universal choice in alternating transitions, a run of an alternating automaton is a tree rather than a sequence. A tree is a (finite or infinite) connected directed graph, with one node designated as the root and denoted by $\varepsilon$, and in which every non-root node has a unique parent ($s$ is the parent of $t$ and $t$ is a child of $s$ if there is an edge from $s$ to $t$) and the root $\varepsilon$ has no parent. The level of a node $x$, denoted $|x|$, is its distance from the root $\varepsilon$; in particular, $|\varepsilon| = 0$. A branch $\beta = x_0, x_1, \ldots$ of a tree is a maximal sequence of nodes such that $x_0$ is the root $\varepsilon$ and $x_i$ is the parent of $x_{i+1}$ for all $i > 0$. Note that $\beta$ can be finite or infinite. A $\Sigma$-labeled tree, for a finite alphabet $\Sigma$, is a pair $(\tau, T)$, where $\tau$ is a tree and $T$ is a mapping from $nodes(\tau)$ to $\Sigma$ that assigns to every node of $\tau$ a label in $\Sigma$. We often refer to $T$ as the labeled tree. A branch $\beta = x_0, x_1, \ldots$ of $T$ defines an infinite word $T(\beta) = T(x_0), T(x_1), \ldots$ consisting of the sequence of labels along the branch.
Formally, a run of $A$ on a finite word $w = a_0, a_1, \ldots, a_{n-1}$ is a finite $S$-labeled tree $r$ such that $r(\varepsilon) = s_0$ and the following holds:

if $|x| = i < n$, $r(x) = s$, and $\rho(s, a_i) = \theta$, then $x$ has $k$ children $x_1, \ldots, x_k$, for some $k \le |S|$, and $\{r(x_1), \ldots, r(x_k)\}$ satisfies $\theta$.

For example, if $\rho(s_0, a_0)$ is $(s_1 \vee s_2) \wedge (s_3 \vee s_4)$, then the nodes of the run tree at level 1 include the label $s_1$ or the label $s_2$ and also include the label $s_3$ or the label $s_4$. Note that the depth of $r$ (i.e., the maximal level of a node in $r$) is at most $n$, but not all branches need to reach such depth, since if $\rho(r(x), a_i) = $ true, then $x$ does not need to have any children. On the other hand, if $|x| = i < n$ and $r(x) = s$, then we cannot have $\rho(s, a_i) = $ false, since false is not satisfiable.

The run tree $r$ is accepting if all nodes at depth $n$ are labeled by states in $F$. Thus, a branch in an accepting run has to hit the true transition or hit an accepting state after reading all the input word.
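Acceptance of a finite word by an alternating automaton can be decided without building the run tree explicitly: from a state $s$ and a remaining word, evaluate $\rho(s, a)$ with each state atom replaced by whether the rest of the word is accepted from that state. The sketch below (illustrative, not from the paper; it repeats the tuple encoding of positive Boolean formulas) follows this recursion.

```python
# Acceptance of a finite word by an alternating automaton (illustrative sketch).
# rho maps (state, symbol) to a positive Boolean formula over states, encoded as
# True, False, a state, or ('and'|'or', left, right).

def eval_formula(f, val):
    """Evaluate a positive Boolean formula under val: state -> bool."""
    if f is True or f is False:
        return f
    if isinstance(f, tuple):
        op, l, r = f
        return (eval_formula(l, val) and eval_formula(r, val)) if op == 'and' \
            else (eval_formula(l, val) or eval_formula(r, val))
    return val(f)

def alt_accepts(rho, acc, state, word):
    """True iff the alternating automaton accepts `word` starting from `state`."""
    if not word:
        return state in acc
    theta = rho[(state, word[0])]
    return eval_formula(theta, lambda t: alt_accepts(rho, acc, t, word[1:]))

# rho(s, a) = (s1 and s2); s1 and s2 loop on a.
rho = {('s', 'a'): ('and', 's1', 's2'),
       ('s1', 'a'): 's1', ('s2', 'a'): 's2'}
print(alt_accepts(rho, {'s1', 's2'}, 's', 'aa'))   # True
print(alt_accepts(rho, {'s1'}, 's', 'aa'))         # False
```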
What is the relationship between alternating automata and nondeterministic au-
tomata? It turns out that just as nondeterministic automata have the same expressive
power as deterministic automata but they are exponentially more succinct, alternating
automata have the same expressive power as nondeterministic automata but they are
exponentially more succinct.
We first show that alternating automata are at least as expressive and as succinct as
nondeterministic automata.
Proposition 15. [BL80, CKS81, Lei81] Let $A$ be a nondeterministic automaton. Then there is an alternating automaton $A_a$ such that $L(A_a) = L(A)$.
Proof: Let $A = (\Sigma, S, S^0, \rho, F)$. Then $A_a = (\Sigma, S \cup \{s_0\}, s_0, \rho_a, F)$, where $s_0$ is a new state, and $\rho_a$ is defined as follows, for $b \in \Sigma$ and $s \in S$:

- $\rho_a(s_0, b) = \bigvee_{t \in S^0,\, t' \in \rho(t, b)} t'$,
- $\rho_a(s, b) = \bigvee_{t \in \rho(s, b)} t$.

(We take an empty disjunction in the definition of $\rho_a$ to be equivalent to false.) Essentially, the transitions of $A$ are viewed as disjunctions in $A_a$. A special treatment is needed for the initial state, since we allow a set of initial states in nondeterministic automata, but only a single initial state in alternating automata.

Note that $A_a$ has essentially the same size as $A$; that is, the descriptions of $A_a$ and $A$ have the same length.
We now show that alternating automata are not more expressive than nondetermin-
istic automata.
Proposition 16. [BL80, CKS81, Lei81] Let $A$ be an alternating automaton. Then there is a nondeterministic automaton $A_n$ such that $L(A_n) = L(A)$.
Proof: Let $A = (\Sigma, S, s_0, \rho, F)$. Then $A_n = (\Sigma, S_n, \{\{s_0\}\}, \rho_n, F_n)$, where $S_n = 2^S$, $F_n = 2^F$, and

$\rho_n(T, a) = \{T' \mid T'$ satisfies $\bigwedge_{t \in T} \rho(t, a)\}.$

(We take an empty conjunction in the definition of $\rho_n$ to be equivalent to true; thus, $\emptyset \in \rho_n(\emptyset, a)$.)

Intuitively, $A_n$ guesses a run tree of $A$. At a given point of a run of $A_n$, it keeps in its memory a whole level of the run tree of $A$. As it reads the next input symbol, it guesses the next level of the run tree of $A$.
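The construction of Proposition 16 can be sketched by brute force: enumerate every candidate set $T'$ and keep those that satisfy the conjunction (illustrative code, not from the paper; the exhaustive enumeration mirrors the inherent exponential blow-up).

```python
# Removing alternation (Proposition 16): states of A_n are sets of states of A.
# Exhaustive subset enumeration is an illustrative (and inherently exponential) choice.

from itertools import chain, combinations

def eval_formula(f, Y):
    if f is True or f is False:
        return f
    if isinstance(f, tuple):
        op, l, r = f
        return (eval_formula(l, Y) and eval_formula(r, Y)) if op == 'and' \
            else (eval_formula(l, Y) or eval_formula(r, Y))
    return f in Y

def subsets(S):
    S = list(S)
    return map(frozenset,
               chain.from_iterable(combinations(S, k) for k in range(len(S) + 1)))

def dealternate(states, alphabet, rho, init, acc):
    """Build (S_n, initial state, rho_n, F_n) of the equivalent nondeterministic automaton."""
    S_n = set(subsets(states))
    F_n = {T for T in S_n if T <= frozenset(acc)}          # 2^F
    rho_n = {}
    for T in S_n:
        for a in alphabet:
            # an empty conjunction is true, so when T is empty every T' qualifies
            rho_n[(T, a)] = {U for U in S_n
                             if all(eval_formula(rho[(t, a)], U) for t in T)}
    return S_n, frozenset(init), rho_n, F_n

rho = {('s', 'a'): ('and', 's1', 's2'), ('s1', 'a'): 's1', ('s2', 'a'): 's2'}
S_n, q0, rho_n, F_n = dealternate({'s', 's1', 's2'}, 'a', rho, {'s'}, {'s1', 's2'})
print(frozenset({'s1', 's2'}) in rho_n[(q0, 'a')])   # True
```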
The translation from alternating automata to nondeterministic automata involves an exponential blow-up. As shown in [BL80, CKS81, Lei81], this blow-up is unavoidable.

For example, fix some $n \ge 1$, and let $\Sigma = \{a, b\}$. Let $L_n$ be the set of all words that have two different symbols at distance $n$ from each other. That is,

$L_n = \{u a v b w \mid u, w \in \Sigma^*$ and $v \in \Sigma^{n-1}\} \cup \{u b v a w \mid u, w \in \Sigma^*$ and $v \in \Sigma^{n-1}\}.$

It is easy to see that $L_n$ is accepted by the nondeterministic automaton $A = (\Sigma, \{p, q\} \cup (\{a, b\} \times \{1, \ldots, n\}), \{p\}, \rho, \{q\})$, where $\rho(p, a) = \{p, \langle a, 1\rangle\}$, $\rho(p, b) = \{p, \langle b, 1\rangle\}$, $\rho(\langle a, i\rangle, x) = \{\langle a, i+1\rangle\}$ and $\rho(\langle b, i\rangle, x) = \{\langle b, i+1\rangle\}$ for $x \in \Sigma$ and $0 < i < n$, $\rho(\langle a, n\rangle, a) = \emptyset$, $\rho(\langle a, n\rangle, b) = \{q\}$, $\rho(\langle b, n\rangle, b) = \emptyset$, $\rho(\langle b, n\rangle, a) = \{q\}$, and $\rho(q, x) = \{q\}$ for $x \in \Sigma$. Intuitively, $A$ guesses a position in the input word, reads the input symbol at that position, moves $n$ positions to the right, and checks that it contains a different symbol. Note that $A$ has $2n + 2$ states. By Propositions 15 and 17 (below), there is an alternating automaton $A_a$ with $2n + 3$ states that accepts the complementary language $\overline{L_n} = \Sigma^* - L_n$.

Suppose that we have a nondeterministic automaton $A_{nd} = (\Sigma, S, S^0, \rho_{nd}, F)$ with fewer than $2^n$ states that accepts $\overline{L_n}$. Thus, $A_{nd}$ accepts all words $ww$, where $w \in \Sigma^n$. Let $s^w_0, \ldots, s^w_{2n}$ be an accepting run of $A_{nd}$ on $ww$. Since $|S| < 2^n$, there are two distinct words $u, v \in \Sigma^n$ such that $s^u_n = s^v_n$. Thus, $s^u_0, \ldots, s^u_n, s^v_{n+1}, \ldots, s^v_{2n}$ is an accepting run of $A_{nd}$ on $uv$, but $uv \notin \overline{L_n}$ since it must have two different symbols at distance $n$ from each other.
One advantage of alternating automata is that it is easy to complement them. We first need to define the dual operation on formulas in $\mathcal{B}^+(X)$. Intuitively, the dual $\overline{\theta}$ of a formula $\theta$ is obtained from $\theta$ by switching $\vee$ and $\wedge$, and by switching true and false. For example, $\overline{x \vee (y \wedge z)} = x \wedge (y \vee z)$. (Note that we are considering formulas in $\mathcal{B}^+(X)$, so we cannot simply apply negation to these formulas.) Formally, we define the dual operation as follows: $\overline{x} = x$, for $x \in X$; $\overline{\textrm{true}} = \textrm{false}$; $\overline{\textrm{false}} = \textrm{true}$; $\overline{(\alpha \wedge \beta)} = (\overline{\alpha} \vee \overline{\beta})$; and $\overline{(\alpha \vee \beta)} = (\overline{\alpha} \wedge \overline{\beta})$.
Suppose now that we are given an alternating automaton $A = (\Sigma, S, s_0, \rho, F)$. Define $\overline{A} = (\Sigma, S, s_0, \overline{\rho}, S - F)$, where $\overline{\rho}(s, a) = \overline{\rho(s, a)}$ for all $s \in S$ and $a \in \Sigma$. That is, $\overline{\rho}$ is the dualized transition function.
Proposition 17. [BL80, CKS81, Lei81] Let $A$ be an alternating automaton. Then $L(\overline{A}) = \Sigma^* - L(A)$.
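Complementation by dualization is a short recursion over the transition formulas; the sketch below (illustrative, not from the paper) implements the dual operation and the construction of $\overline{A}$.

```python
# Complementing an alternating automaton (Proposition 17): dualize the transition
# formulas and complement the accepting set. Formula encoding as in the earlier sketches.

def dual(f):
    if f is True:
        return False
    if f is False:
        return True
    if isinstance(f, tuple):
        op, l, r = f
        return ('or' if op == 'and' else 'and', dual(l), dual(r))
    return f                      # elements of X are left unchanged

def complement(states, alphabet, rho, init, acc):
    """Dualize every transition formula and complement the accepting set."""
    rho_bar = {(s, a): dual(rho[(s, a)]) for s in states for a in alphabet}
    return states, init, rho_bar, set(states) - set(acc)

print(dual(('or', 'x', ('and', 'y', 'z'))))   # ('and', 'x', ('or', 'y', 'z'))
```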
By combining Propositions 11 and 16, we can obtain a nonemptiness test for alter-
nating automata.
Proposition 18. [CKS81]
1. The nonemptiness problem for alternating automata is decidable in exponential
time.
2. The nonemptiness problem for alternating automata is PSPACE-complete.
Proof: All that remains to be shown is the PSPACE-hardness of nonemptiness. Recall that PSPACE-hardness of nonuniversality was shown in Proposition 12 by a generic reduction. That is, there is a logarithmic-space algorithm that given a polynomial-space-bounded Turing machine $M$ and a word $w$ outputs an automaton $A_{M,w}$ such that $M$ accepts $w$ iff $A_{M,w}$ is nonuniversal. By Proposition 15, there is an alternating automaton $A_a$ such that $L(A_a) = L(A_{M,w})$ and $A_a$ has the same size as $A_{M,w}$. By Proposition 17, $L(\overline{A_a}) = \Sigma^* - L(A_a)$. Thus, $A_{M,w}$ is nonuniversal iff $\overline{A_a}$ is nonempty.
2.6 Automata on Infinite Words - Alternation
We saw earlier that a nondeterministic automaton can be viewed both as an automaton on finite words and as an automaton on infinite words. Similarly, an alternating automaton can also be viewed as an automaton on infinite words, in which case it is called an alternating Büchi automaton [MS87].
Let $A = (\Sigma, S, s_0, \rho, F)$ be an alternating Büchi automaton. A run of $A$ on an infinite word $w = a_0, a_1, \ldots$ is a (possibly infinite) $S$-labeled tree $r$ such that $r(\varepsilon) = s_0$ and the following holds:

if $|x| = i$, $r(x) = s$, and $\rho(s, a_i) = \theta$, then $x$ has $k$ children $x_1, \ldots, x_k$, for some $k \le |S|$, and $\{r(x_1), \ldots, r(x_k)\}$ satisfies $\theta$.

The run $r$ is accepting if every infinite branch in $r$ includes infinitely many labels in $F$. Note that the run can also have finite branches; if $|x| = i$, $r(x) = s$, and $\rho(s, a_i) = $ true, then $x$ does not need to have any children.
As with alternating automata, alternating Büchi automata are as expressive as nondeterministic Büchi automata. We first show that alternating automata are at least as expressive and as succinct as nondeterministic automata. The proof of the following proposition is identical to the proof of Proposition 15.
Proposition 19. [MS87] Let $A$ be a nondeterministic Büchi automaton. Then there is an alternating Büchi automaton $A_a$ such that $L_\omega(A_a) = L_\omega(A)$.
As the reader may expect by now, alternating Büchi automata are not more expressive than nondeterministic Büchi automata. The proof of this fact, however, is more involved than the proof in the finite-word case.
Proposition 20. [MH84] Let $A$ be an alternating Büchi automaton. Then there is a nondeterministic Büchi automaton $A_n$ such that $L_\omega(A_n) = L_\omega(A)$.
Proof: As in the finite-word case, $A_n$ guesses a run of $A$. At a given point of a run of $A_n$, it keeps in its memory a whole level of the run of $A$ (which is a tree). As it reads the next input symbol, it guesses the next level of the run tree of $A$. The nondeterministic automaton, however, also has to keep information about occurrences of accepting states in order to make sure that every infinite branch hits accepting states infinitely often. To that end, $A_n$ partitions every level of the run of $A$ into two sets to distinguish between branches that hit $F$ recently and branches that did not hit $F$ recently.

Let $A = (\Sigma, S, s_0, \rho, F)$. Then $A_n = (\Sigma, S_n, S^0, \rho_n, F_n)$, where $S_n = 2^S \times 2^S$ (i.e., each state is a pair of sets of states of $A$), $S^0 = \{(\{s_0\}, \emptyset)\}$ (i.e., the single initial state is the pair consisting of the singleton set $\{s_0\}$ and the empty set), $F_n = \{\emptyset\} \times 2^S$, and, for $U \ne \emptyset$,

$\rho_n((U, V), a) = \{(U', V') \mid$ there exist $X, Y \subseteq S$ such that $X$ satisfies $\bigwedge_{t \in U} \rho(t, a)$, $Y$ satisfies $\bigwedge_{t \in V} \rho(t, a)$, $U' = X - F$, and $V' = Y \cup (X \cap F)\}$;

$\rho_n((\emptyset, V), a) = \{(U', V') \mid$ there exists $Y \subseteq S$ such that $Y$ satisfies $\bigwedge_{t \in V} \rho(t, a)$, $U' = Y - F$, and $V' = Y \cap F\}.$

The proof that this construction is correct requires a careful analysis of accepting runs of $A$.
An important feature of this construction is that the blowup is exponential.
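The breakpoint construction in the proof of Proposition 20 can be sketched as follows (an illustrative, deliberately naive rendering, not from the paper); the witness sets $X$ and $Y$ are found by exhaustive enumeration, mirroring the exponential blow-up of the construction, and the accepting states of $A_n$ are the pairs whose first component is empty.

```python
# Alternation removal for Buechi automata (Proposition 20), as a breakpoint construction.
# States of A_n are pairs (U, V) of sets of states of A; exhaustive enumeration of the
# witness sets X and Y is an illustrative (exponential) choice.

from itertools import combinations

def eval_formula(f, Y):
    if f is True or f is False:
        return f
    if isinstance(f, tuple):
        op, l, r = f
        return (eval_formula(l, Y) and eval_formula(r, Y)) if op == 'and' \
            else (eval_formula(l, Y) or eval_formula(r, Y))
    return f in Y

def subsets(S):
    S = list(S)
    return [frozenset(c) for k in range(len(S) + 1) for c in combinations(S, k)]

def mh_successors(states, rho, acc, U, V, a):
    """All successors (U', V') of the pair (U, V) on the symbol a."""
    sat = lambda T, X: all(eval_formula(rho[(t, a)], X) for t in T)   # X satisfies /\ rho(t,a)
    F = frozenset(acc)
    result = set()
    if U:
        for X in subsets(states):
            if not sat(U, X):
                continue
            for Y in subsets(states):
                if sat(V, Y):
                    result.add((X - F, Y | (X & F)))
    else:
        for Y in subsets(states):
            if sat(V, Y):
                result.add((Y - F, Y & F))
    return result

# A one-state accepting loop: the first component empties, reaching an accepting pair.
print(mh_successors({'q'}, {('q', 'a'): 'q'}, {'q'}, frozenset({'q'}), frozenset(), 'a'))
# {(frozenset(), frozenset({'q'}))}
```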
While complementation of alternating automata is easy (Proposition 17), this is not the case for alternating Büchi automata. Here we run into the same difficulty that we ran into in Section 2.2: not going infinitely often through accepting states is not the same as going infinitely often through non-accepting states. From Propositions 7, 19 and 20 it follows that alternating Büchi automata are closed under complement, but the precise complexity of complementation in this case is not known.
Finally, by combining Propositions 13 and 20, we can obtain a nonemptiness test for alternating Büchi automata.
Proposition 21.
1. The nonemptiness problem for alternating Büchi automata is decidable in exponential time.
2. The nonemptiness problem for alternating Büchi automata is PSPACE-complete.
Proof: All that remains to be shown is the PSPACE-hardness of nonemptiness. We show that the nonemptiness problem for alternating automata is reducible to the nonemptiness problem for alternating Büchi automata. Let $A = (\Sigma, S, s_0, \rho, F)$ be an alternating automaton. Consider the alternating Büchi automaton $A' = (\Sigma, S, s_0, \rho', \emptyset)$, where $\rho'(s, a) = \rho(s, a)$ for $s \in S - F$ and $a \in \Sigma$, and $\rho'(s, a) = $ true for $s \in F$ and $a \in \Sigma$.

We claim that $L(A) \ne \emptyset$ iff $L_\omega(A') \ne \emptyset$. Suppose first that $w \in L(A)$ for some $w \in \Sigma^*$. Then there is an accepting run $r$ of $A$ on $w$. But then $r$ is also an accepting run of $A'$ on $wu$ for all $u \in \Sigma^\omega$, because $\rho'(s, a) = $ true for $s \in F$ and $a \in \Sigma$, so $wu \in L_\omega(A')$. Suppose, on the other hand, that $w \in L_\omega(A')$ for some $w \in \Sigma^\omega$. Then there is an accepting run $r$ of $A'$ on $w$. Since $A'$ has no accepting state, $r$ cannot have infinite branches, so by König's Lemma it must be finite. Thus, there is a finite prefix $u$ of $w$ such that $r$ is an accepting run of $A$ on $u$, so $u \in L(A)$.
3 Linear Temporal Logic and Automata on Infinite Words
Formulas of linear-time propositional temporal logic (LTL) are built from a set $Prop$ of atomic propositions and are closed under the application of Boolean connectives, the unary temporal connective $X$ (next), and the binary temporal connective $U$ (until) [Pnu77, GPSS80]. LTL is interpreted over computations. A computation is a function $\pi : \mathbb{N} \to 2^{Prop}$, which assigns truth values to the elements of $Prop$ at each time instant (natural number). For a computation $\pi$ and a point $i \in \omega$, we have that:
- $\pi, i \models p$ for $p \in Prop$ iff $p \in \pi(i)$.
- $\pi, i \models \xi \wedge \psi$ iff $\pi, i \models \xi$ and $\pi, i \models \psi$.
- $\pi, i \models \neg\varphi$ iff not $\pi, i \models \varphi$.
- $\pi, i \models X\varphi$ iff $\pi, i+1 \models \varphi$.
- $\pi, i \models \xi U \psi$ iff for some $j \ge i$, we have $\pi, j \models \psi$ and for all $k$, $i \le k < j$, we have $\pi, k \models \xi$.
Thus, the formula $\textrm{true}\,U\,\varphi$, abbreviated as $F\varphi$, says that $\varphi$ holds eventually, and the formula $\neg F\neg\varphi$, abbreviated $G\varphi$, says that $\varphi$ holds henceforth. For example, the formula $G(\neg request \vee (request\,U\,grant))$ says that whenever a request is made it holds continuously until it is eventually granted. We will say that $\pi$ satisfies a formula $\varphi$, denoted $\pi \models \varphi$, iff $\pi, 0 \models \varphi$.
Computations can also be viewed as infinite words over the alphabet $2^{Prop}$. We shall see that the set of computations satisfying a given formula are exactly those accepted by some finite automaton on infinite words. This fact was proven first in [SPH84]. The proof there is by induction on structure of formulas. Unfortunately, certain inductive steps involve an exponential blow-up (e.g., negation corresponds to complementation, which we have seen to be exponential). As a result, the complexity of that translation is nonelementary, i.e., it may involve an unbounded stack of exponentials (that is, the complexity bound is of the form $2^{2^{\cdot^{\cdot^{\cdot^{n}}}}}$, where the height of the stack is $n$).
The following theorem establishes a very simple translation between LTL and alternating Büchi automata.
Theorem 22. [MSS88, Var94] Given an LTL formula $\varphi$, one can build an alternating Büchi automaton $A_\varphi = (\Sigma, S, s_0, \rho, F)$, where $\Sigma = 2^{Prop}$ and $|S|$ is in $O(|\varphi|)$, such that $L_\omega(A_\varphi)$ is exactly the set of computations satisfying the formula $\varphi$.
Proof: The set $S$ of states consists of all subformulas of $\varphi$ and their negation (we identify the formula $\neg\neg\xi$ with $\xi$). The initial state $s_0$ is $\varphi$ itself. The set $F$ of accepting states consists of all formulas in $S$ of the form $\neg(\xi U \psi)$. It remains to define the transition function $\rho$.

In this construction, we use a variation of the notion of dual that we used in Section 2.5. Here, the dual $\overline{\theta}$ of a formula is obtained from $\theta$ by switching $\vee$ and $\wedge$, by switching true and false, and, in addition, by negating subformulas in $S$, e.g., $\overline{\neg p \vee (q \wedge Xq)}$ is $p \wedge (\neg q \vee \neg Xq)$. More formally, $\overline{\xi} = \neg\xi$, for $\xi \in S$, $\overline{\textrm{true}} = \textrm{false}$, $\overline{\textrm{false}} = \textrm{true}$, $\overline{(\alpha \wedge \beta)} = (\overline{\alpha} \vee \overline{\beta})$, and $\overline{(\alpha \vee \beta)} = (\overline{\alpha} \wedge \overline{\beta})$.

We can now define $\rho$:

- $\rho(p, a) = $ true if $p \in a$,
- $\rho(p, a) = $ false if $p \notin a$,
- $\rho(\xi \wedge \psi, a) = \rho(\xi, a) \wedge \rho(\psi, a)$,
- $\rho(\neg\xi, a) = \overline{\rho(\xi, a)}$,
- $\rho(X\xi, a) = \xi$,
- $\rho(\xi U \psi, a) = \rho(\psi, a) \vee (\rho(\xi, a) \wedge \xi U \psi)$.

Note that $\rho(\xi, a)$ is defined by induction on the structure of $\xi$.
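The transition function of Theorem 22 is easily computed by recursion on the formula; the sketch below (illustrative, not from the paper) encodes LTL formulas as nested tuples and returns the positive Boolean formula $\rho(\xi, a)$, with uppercase 'AND'/'OR' marking the Boolean connectives of $\mathcal{B}^+(S)$ so they are not confused with the LTL connectives inside the state atoms.

```python
# rho(xi, a) for the alternating Buechi automaton A_phi of Theorem 22 (illustrative).
# LTL formulas: ('ap', p), ('not', f), ('and', f, g), ('X', f), ('U', f, g).
# Results are positive Boolean formulas over formulas: True, False, ('AND', x, y), ('OR', x, y).

def negate(f):
    return f[1] if f[0] == 'not' else ('not', f)       # identify ~~f with f

def dual(theta):
    """Switch AND/OR and true/false, and negate the state atoms."""
    if theta is True:
        return False
    if theta is False:
        return True
    if isinstance(theta, tuple) and theta[0] in ('AND', 'OR'):
        op = 'OR' if theta[0] == 'AND' else 'AND'
        return (op, dual(theta[1]), dual(theta[2]))
    return negate(theta)                                # theta is a state (an LTL formula)

def delta(f, a):
    """The letter a is the set of atomic propositions that hold at the current instant."""
    kind = f[0]
    if kind == 'ap':
        return f[1] in a
    if kind == 'not':
        return dual(delta(f[1], a))
    if kind == 'and':
        return ('AND', delta(f[1], a), delta(f[2], a))
    if kind == 'X':
        return f[1]
    if kind == 'U':
        return ('OR', delta(f[2], a), ('AND', delta(f[1], a), f))
    raise ValueError(kind)

# phi = (X ~p) U q, the formula of Example 1 below
phi = ('U', ('X', ('not', ('ap', 'p'))), ('ap', 'q'))
print(delta(phi, {'p'}))   # ('OR', False, ('AND', ('not', ('ap', 'p')), phi)), i.e. ~p and phi
```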
Consider now a run $r$ of $A_\varphi$. It is easy to see that $r$ can have two types of infinite branches. Each infinite branch is labeled from some point on by a formula of the form $\xi U \psi$ or by a formula of the form $\neg(\xi U \psi)$. Since $\rho(\neg(\xi U \psi), a) = \overline{\rho(\psi, a)} \wedge (\overline{\rho(\xi, a)} \vee \neg(\xi U \psi))$, an infinite branch labeled from some point by $\neg(\xi U \psi)$ ensures that $\xi U \psi$ indeed fails at that point, since $\psi$ fails from that point on. On the other hand, an infinite branch labeled from some point by $\xi U \psi$ does not ensure that $\xi U \psi$ holds at that point, since it does not ensure that $\psi$ eventually holds. Thus, while we should allow infinite branches labeled by $\neg(\xi U \psi)$, we should not allow infinite branches labeled by $\xi U \psi$. This is why we defined $F$ to consist of all formulas in $S$ of the form $\neg(\xi U \psi)$.
Example 1. Consider the formula $\varphi = (X\neg p)\,U\,q$. The alternating Büchi automaton associated with $\varphi$ is $A_\varphi = (2^{\{p, q\}}, \{\varphi, \neg\varphi, X\neg p, \neg X\neg p, \neg p, p, q, \neg q\}, \varphi, \rho, \{\neg\varphi\})$, where $\rho$ is described in the following table.
| $s$ | $\rho(s, \{p, q\})$ | $\rho(s, \{p\})$ | $\rho(s, \{q\})$ | $\rho(s, \emptyset)$ |
| $\varphi$ | true | $\neg p \wedge \varphi$ | true | $\neg p \wedge \varphi$ |
| $\neg\varphi$ | false | $p \vee \neg\varphi$ | false | $p \vee \neg\varphi$ |
| $X\neg p$ | $\neg p$ | $\neg p$ | $\neg p$ | $\neg p$ |
| $\neg X\neg p$ | $p$ | $p$ | $p$ | $p$ |
| $\neg p$ | false | false | true | true |
| $p$ | true | true | false | false |
| $q$ | true | false | true | false |
| $\neg q$ | false | true | false | true |
In the state $\varphi$, if $q$ does not hold in the present state, then $A_\varphi$ requires both $X\neg p$ to be satisfied in the present state (that is, $\neg p$ has to be satisfied in the next state), and $\varphi$ to be satisfied in the next state. As $\varphi \notin F$, $A_\varphi$ should eventually reach a state that satisfies $q$. Note that many of the states, e.g., the subformulas $X\neg p$ and $q$, are not reachable; i.e., they do not appear in any run of $A_\varphi$.
By applying Proposition 20, we now get:
Corollary 23. [VW94] Given an LTL formula $\varphi$, one can build a Büchi automaton $A_\varphi = (\Sigma, S, S^0, \rho, F)$, where $\Sigma = 2^{Prop}$ and $|S|$ is in $2^{O(|\varphi|)}$, such that $L_\omega(A_\varphi)$ is exactly the set of computations satisfying the formula $\varphi$.
The proof of Corollary 23 in [VW94] is direct and does not go through alternating Büchi automata. The advantage of the proof here is that it separates the logic from the combinatorics. Theorem 22 handles the logic, while Proposition 20 handles the combinatorics.
Example 2. Consider the formula $\varphi = FGp$, which requires $p$ to hold from some point on. The Büchi automaton associated with $\varphi$ is $A_\varphi = (2^{\{p\}}, \{0, 1\}, \{0\}, \rho, \{1\})$