Extending TIM domain analysis to handle ADL constructs
Stephen Cresswell, Maria Fox and Derek Long
Department of Computer Science, University of Durham, UK.
{s.n.cresswell,maria.fox,d.p.long}@durham.ac.uk
Abstract
Planning domain analysis provides information which
is useful to the domain designer, and which can also
be exploited by a planner to reduce search. The TIM
domain analysis tool infers types and invariants from
an input domain definition and initial state. In this
paper we describe extensions to the TIM system to al-
low efficient processing of domains written in a more
expressive language with features of ADL: types, con-
ditional effects, universally quantified effects and neg-
ative preconditions.
Introduction
The analysis of a planning domain can reveal its im-
plicit type structure and various kinds of state invari-
ants. This information can be used by the domain designer
to check the consistency of the domain and reveal bugs
in the encoding. It has also been successfully exploited
in speeding up planning algorithms by allowing incon-
sistent states to be eliminated (Fox & Long 2000), by
revealing hidden structure that can be solved by a spe-
cialised solver (Long & Fox 2000), or as a basis for or-
dering goals (Porteous, Sebastia, & Hoffmann 2001).
Many other researchers are also interested in domain
analysis. Planners based on propositional satisfiabil-
ity (Kautz & Selman 1998) and CSP (van Beek &
Chen 1999) often require hand-coded domain knowl-
edge, much of which could be derived from domain
analysis. TLPlan (Bacchus & Kabanza 2000) and
TALPlanner (Doherty & Kvarnstrom 1999) rely on con-
trol knowledge which might be inferrable from the un-
derlying structures describing the behaviour of the do-
main. Our analysis produces not just invariants, but
the underlying behavioural models from which they can
be produced. These models provide a basis for further
analysis, giving an advantage over other invariant al-
gorithms, such as DISCOPLAN (Gerevini & Schubert
1998), which do not produce these structures.
The TIM system performs its analysis by construct-
ing from the planning operators a set of finite state
machines (FSMs) describing all the transitions possible
for single objects in the domain. The type structure
and domain invariants are derived from analysis of the
FSMs.
Earlier versions of TIM have accepted planning prob-
lems expressed only in the basic language of STRIPS.
A more expressive language for describing planning do-
mains using features derived from ADL (Pednault 1989)
is in widespread use. The use of these extensions makes
life easier for the domain designer, but it is difficult to
handle them efficiently in planning systems.
In this paper we describe work to extend the lan-
guage handled by TIM to include the main features of
ADL, whilst also attempting to preserve the efficiency
of TIM processing. The features we describe here are
types, conditional effects, universally quantified effects
and negative preconditions.
For example, in the briefcase domain, consider the
move operator, shown below. The operator includes
a universally quantified conditional effect which says
that when the briefcase is moved between locations, all
the portable objects which are inside the briefcase also
change their location. (This encoding of the domain is from the IPP problem set.)
(:action move
:parameters (?m ?l - location)
:precondition (is-at ?m)
:effect (and (is-at ?l)
(not (is-at ?m))
(forall (?x - portable)
(when (in ?x)
(and (at ?x ?l)
(not (at ?x ?m)))))))
A typical invariant that we would like to get from
this domain is that portable objects are at exactly one
location:
∀x: portable · ∀y · ∀z · at(x, y) ∧ at(x, z) → y = z
∀x: portable · (∃y: location · at(x, y))
TIM
For a full account of the TIM processing, see (Fox &
Long 1998). Here we briefly summarise those parts of
the algorithm which are relevant to the extension to
ADL.
We are concerned with extracting state descriptions
for individual objects. These are described using prop-
erties, which describe a predicate and an argument po-
sition in which the object occurs. For example, if we
have at(plane27, durham) in the initial state, then we
consider the object plane27 to have property at1, and
the object durham to have property at2.
To explain the processing of domain operators by
TIM, we must introduce two kinds of derived struc-
ture — the property relating structure (PRS) and the
transition rule.
The PRS is derived from an operator, and records
the properties forming preconditions, add effects and
delete effects for a single parameter of an operator. For
example, consider the operator fly:
(:action fly
:parameters (?p ?from ?to)
:precondition (and (at ?p ?from) (fuelled ?p) (loc ?to))
:effect (and (not (at ?p ?from))
(at ?p ?to)
(not (fuelled ?p))
(unfuelled ?p)))
The parameters of the operator give us the following
PRSs:
?p
  pre: at1, fuelled1
  add: at1, unfuelled1
  del: at1, fuelled1
?from
  pre: at2
  add:
  del: at2
?to
  pre: loc1
  add: at2
  del:
A transition rule is derived from a PRS, and describes
properties exchanged. Transition rules are of the form:
  Enablers ⇒ Start → Finish
where Enablers is a bag of precondition properties
which are not deleted, Start is a bag of precondition
properties which are deleted, and Finish is a bag of
properties which are added. Empty bags are shown as
null, or may be omitted in the case of enablers. The
transition rules corresponding to the above PRSs are:
  fuelled1 ⇒ at1 → at1
  at1 ⇒ fuelled1 → unfuelled1
  at2 → null
  loc1 ⇒ null → at2
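As a concrete illustration of this definition, the following sketch in Python (a minimal illustration under an assumed representation, not the TIM implementation itself) computes the enablers, start and finish bags of a PRS given as three lists of property names. It reproduces the ?from and ?to rules directly; for ?p, TIM additionally splits the combined exchange into the two rules listed above, which this sketch does not attempt.

from collections import Counter

def transition_rule(pre, add, dele):
    # Enablers: preconditions that are not deleted; Start: preconditions
    # that are deleted; Finish: properties that are added.
    pre, add, dele = Counter(pre), Counter(add), Counter(dele)
    start = pre & dele
    return pre - start, start, add

# ?from: pre {at2}, del {at2}    ->  at2 -> null
print(transition_rule(["at2"], [], ["at2"]))
# ?to:   pre {loc1}, add {at2}   ->  loc1 => null -> at2
print(transition_rule(["loc1"], ["at2"], []))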
The part of the TIM processing that we discuss in the
rest of the paper consists of the following main stages:
1. Construct PRSs from operators.
2. Construct transition rules from PRSs.
3. Unite the properties in the start and finish parts
of transition rules, to group properties into prop-
erty and attribute spaces. Property spaces arise
where transition rules define a finite number of pos-
sible states (as the rules involving at1, fuelled1, and
unfuelled1) — these define an FSM. Attribute spaces
arise where transition rules allow properties to be
gained without cost (as for at2) — these spaces must
be treated in a separate way, as it is not possible to
enumerate all of their possible states.
4. Project the propositions from the initial state of the
planning problem to form bags of properties repre-
senting the initial states for each state space.
5. Extend the states in each space by application of
transition rules.
This process derives information that can be used
to extract types and invariants. Each unique pattern of
membership of property and attribute spaces for the do-
main objects defines a type. From the generated FSMs,
domain invariants can be extracted.
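To make stages 4 and 5 concrete, here is a minimal Python sketch under an assumed representation (object states are bags of properties stored as sorted tuples, and a rule is a triple of Counters; whether enablers must be present during extension is treated as an assumption and flagged in the comments).

from collections import Counter

def applies(rule, state):
    enablers, start, _ = rule
    # assumption: the enablers of the same object must also be present
    return not ((enablers + start) - Counter(state))

def apply_rule(rule, state):
    _, start, finish = rule
    return tuple(sorted(((Counter(state) - start) + finish).elements()))

def extend(seed_states, rules):
    # exhaustively apply transition rules to enumerate reachable object states
    seen, frontier = set(seed_states), list(seed_states)
    while frontier:
        s = frontier.pop()
        for r in rules:
            if applies(r, s):
                new = apply_rule(r, s)
                if new not in seen:
                    seen.add(new)
                    frontier.append(new)
    return seen

# the two rules for the plane's property space in the fly example
rules = [(Counter(["fuelled1"]), Counter(["at1"]), Counter(["at1"])),
         (Counter(["at1"]), Counter(["fuelled1"]), Counter(["unfuelled1"]))]
print(extend({("at1", "fuelled1")}, rules))
# -> {('at1', 'fuelled1'), ('at1', 'unfuelled1')}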
ADL extensions to TIM
The original version of TIM (Fox & Long 1998) deals
only with untyped STRIPS domains. We are interested
in extending the analysis to a subset of ADL, and our extensions fall
into four areas:
• Types
• Universally quantified effects
• Conditional effects
• Negative preconditions
A generic means of transforming ADL domain de-
scriptions into simple STRIPS representations has been
given by Gazen and Knoblock (Gazen & Knoblock
1997). Although it would be possible to apply this di-
rectly as a preprocessing stage to TIM, an undesirable
feature of this approach is that several stages of the
processing may lead to an exponential blow-up in the
size of domain descriptions or initial state descriptions.
In particular, universally quantified effects lead to an
expansion in the size of the operator proportional to the
number of domain objects that can match the quanti-
fied variable. Conditional effects lead to generation of
multiple operators, with one operator for each possible
combination of secondary conditions.
In the following work, we seek to avoid the combi-
natorial blow-up with a combination of simplifying as-
sumptions, and by avoiding operator expansions which
are irrelevant to the TIM processing.
TIM analysis only needs to work with structures de-
scribing what transitions are available to individual ob-
jects, and what states they can reach. The transitions
generated must capture all possible transitions, and
generate all possible states of individual objects. The
transitions are only partial descriptions of operators,
and the object states are only partial descriptions of a
planning state. This representation does not preserve
or make use of all the conditions present in the original
operator, and could not be correctly used in a planner.
Hence, some of the information which is painstakingly
preserved in the Gazen-Knoblock expansion (e.g. in-
stantiation of forall effects) makes no difference to
the resulting TIM analysis, and can be avoided.
Now we consider the processing of each language fea-
ture in more detail. In most cases, we describe the
conversion by showing a transformation on the opera-
tor and domain description before the standard TIM
analysis is performed on the modified descriptions.
In general, the transformations carried out on opera-
tor descriptions will not yield operators that are correct
for use in a planner. They are correct for the TIM anal-
ysis, because they yield all possible legal transitions for
objects in the domain.
Types
Part of the processing performed by TIM is to derive a
system of types for the domain. Clearly, if types have
already been specified in the domain, these should be
taken into account. Type restrictions on the parameters
of an operator are transformed into additional static
preconditions on the operators.
Types declared for the objects in the domain also give
rise to additional propositions in the initial state. Note
that since PDDL allows for a hierarchy of types, each
object must have a proposition for every type of which
it is a member.
The predicates which are automatically introduced
to discriminate will be recognised as static conditions
by TIM, and will then be taken into account when de-
termining the derived types inferred by TIM. Hence, we
can be sure that TIM types will always be at least
as discriminating as the declared types.
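As a sketch of this compilation (in Python, under a hypothetical representation in which operators, objects and facts are plain data; this is an illustration, not TIM's own data structures), typed parameters become extra static preconditions and typed objects contribute one initial-state fact per type in their declaration chain:

def add_type_preconditions(params, preconditions):
    # params: list of (variable, type) pairs from the :parameters list
    return preconditions + [(t, v) for v, t in params]

def type_facts(objects, supertypes):
    # objects: object -> declared type; supertypes: type -> parent (or None)
    facts = []
    for obj, t in objects.items():
        while t is not None:
            facts.append((t, obj))      # one fact per type in the hierarchy
            t = supertypes.get(t)
    return facts

print(add_type_preconditions([("?m", "location"), ("?l", "location")],
                             [("is-at", "?m")]))
print(type_facts({"plane27": "plane"}, {"plane": "vehicle", "vehicle": None}))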
Conditional Effects
Treatment of conditional effects is rather more challeng-
ing, but offers far more scope for interesting results from
the analysis. Consider the following example operator:
(:action A
:parameters (?x)
:precondition (p ?x)
:effect (and (not (p ?x)) (q ?x)
(when (r ?x) (and (not (r ?x)) (s ?x)))))
From this operator we would like to infer (assuming
no other operators affect the situation) that parameter
?x can make transitions p1 → q1 and p1 ⇒ r1 → s1.
This would enable us to identify two potential state
spaces and consequent invariants: p and q are mutex
properties and s and r are mutex properties.
In considering treatment of conditional effects, sev-
eral proposals were examined and these are discussed
in the following sub-sections.
Separated dependent effects One technique by
which we hoped to harness the power of existing TIM
analysis was to construct a collection of standard tran-
sition rules that capture the behaviour encoded in con-
ditional effects. The previous example shows that
this is possible in certain cases. The first proposal
for achieving this was based on the assumption that all
(when ...) clauses in an operator are mutually exclusive, in
which case there is no problem with exponential blow-
up. Under this assumption, we can generate a sepa-
rate pseudo-operator for each conditional effect. The
pseudo-operator has the primary preconditions and ef-
fects from the original operator, and the when clause is
absorbed by merging its preconditions into the primary
preconditions, and its effects into the primary effects.
We also create an operator with only the primary ef-
fects. These operators are pseudo-operators because
they do not always encode the same behaviour as the
original operators and could not be used for planning.
However, this does not prevent them from being used
to generate valid transition structures that reflect the
transitions described by the original conditional effects
operators.
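Under that exclusivity assumption the construction is simple; the Python sketch below (a minimal illustration with a hypothetical operator representation: preconditions and effects as lists of literal strings) builds one pseudo-operator per when clause plus one carrying only the primary effects.

def pseudo_operators(pre, eff, whens):
    # pre, eff: primary preconditions and effects; whens: (condition, effect) pairs
    ops = [(pre, eff)]                          # primary effects only
    for cond, ceff in whens:
        ops.append((pre + cond, eff + ceff))    # absorb one when clause
    return ops

# operator A from the start of this section
for op in pseudo_operators(["(p ?x)"],
                           ["(not (p ?x))", "(q ?x)"],
                           [(["(r ?x)"], ["(not (r ?x))", "(s ?x)"])]):
    print(op)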
Unfortunately, conditional effects do not always sat-
isfy the assumption that at most one conditional effect
is triggered during a single application of an action. In
fact, the assumption required for correct behaviour can
be weakened: it is only necessary that at most one con-
ditional effect is triggered to affect the state of each pa-
rameter of the operator. Even this condition is stronger
than is appropriate for some domains.
An example where the assumption is violated is the
following:
(:action op_with_non_exclusive_conditions
:parameters (?ob)
:precondition (a ?ob)
:effect (and (not (a ?ob)) (x ?ob)
(when (b ?ob) (and (not (b ?ob)) (y ?ob)))
(when (c ?ob) (and (not (c ?ob)) (z ?ob)))))
For an object with initial properties {a1, b1, c1}, this
operator should allow the state {x1, y1, z1} to be reach-
able.
Instead, we actually generate the rules (a1 → x1),
(a1, b1 → x1, y1) and (a1, c1 → x1, z1), since the opera-
tor leads to the creation of three pseudo-operators: one
with the pure simple effects of deleting a and asserting
x, and the others each taking a precondition from their
respective conditional effect and the appropriate addi-
tional effect. With these rules it is not possible to reach
the state {x1, y1, z1}.
Failure to correctly generate all reachable states leads
to the generation of unsound invariants, so this ap-
proach cannot be used unless it is possible to guarantee
the necessary conditions for valid application.
Separating conditional effects The strong as-
sumption, that at most one of the conditional effects
of an operator will apply, can be replaced with a much
weaker assumption: that any number of the conditional
effects of an operator could be applicable. This as-
sumption can be characterised using the original TIM
machinery by creating pseudo-operators, one with the
complete collection of primary preconditions and effects
from the original operator and one for each conditional
effect, adding the condition for the effect to the pre-
conditions of the original and replacing the effect of
the original with the conditional effect. In the example
above, this would lead to the following transition rules:
a1 → x1, b1 → y1 and c1 → z1.
With these rules it is possible for extension (the pro-
cess by which TIM generates complete state spaces from
the initial state properties of objects) to generate all
the states we want, and more besides. For instance, we
could generate a state {a1, y1, c1}, by applying one of
the conditional effects without the corresponding pri-
mary effect.
The weakened assumption leads to correspondingly
weaker invariants, since the opportunity to apply rules
that do not correspond to actual operator applications
allows apparent access to states that are not, in fact,
reachable. More seriously, however, we have separated
the primary and secondary effects of the operator into
distinct transitions which can only be applied sequen-
tially. This does not fit with the intended meaning of
the operator, which is that all the preconditions (pri-
mary and secondary) are tested in the state before the
effects take place. In the previous example this does
not make any difference to the behaviour of the rules.
The circumstances under which it makes a difference
are:
•When any (primary or secondary) effect deletes a sec-
ondary precondition (for a different conditional ef-
fect). This is because sequentialising the rules will
cause the deleted effect to be unavailable for the ap-
plication of the rule with the secondary precondition
if the deleting rule is applied first. However, rules
can be applied in all possible orders so this leads to a
problem only when the second rule deletes a precon-
dition of the first rule, so that applying the rules in
either order prevents them from both being applied.
•When any (primary or secondary) effect deletes a
(primary or secondary) effect (where not both are
primary components or are in the same conditional
effect). In this case, as an operator, the classical se-
mantics (Lifschitz 1986) causes the add effects to oc-
cur after the delete effects and the apparently para-
doxical effects are resolved.
A simple example of the first case is a pair of effects of
the form:
(when (and (q ?y) (p ?x))
(and (q ?x) (not (p ?x))))
(when (and (p ?y) (p ?x))
(and (r ?x) (not (p ?x))))
In both secondary effects, (p ?x) is deleted. If both
when conditions apply, the condition is deleted once.
The proposed compilation of the operators into transi-
tions will not deal with this correctly. The rules created
from these effects (supposing no primary pre- or post-
conditions affect them) will be of the form: p1 → q1
and p1 → r1. It looks as though a p1 property must be
given up to gain either q1 or r1, but in fact both can be
purchased by giving up a single p1.
To handle this problem we can identify the common
deleted literal and collapse the rules to arrive at a third
rule: p1 → q1, r1. Notice that we cannot replace the
other rules with this one, since the conditions cannot
be assumed to always apply together. More generally,
we cannot assume that two rules generated from the
same operator that have a common element on their
left-hand-sides will always be exchanging different in-
stances of the property (even if derived from different
variables – the variables could refer to the same ob-
ject). Therefore, such rules have to be combined to
collapse the exchanged property into a single instance.
(It is interesting to observe that the pathological case,
in which the rules derive from primary effects affecting
different variables, can lead to a similar problem in the
original TIM analysis). There is a minor problem in
that the process of collapse could, in principle, be ex-
ponentially expensive in the size of the operator, since
every combination of collections of rules sharing a left-
hand-side element must be used to generate a collapsed
rule (in which the shared collection is collapsed into a
single instance of each property). In practice the size
of these sets of rules is very small and the growth is
not a problem. A more significant problem is that the
combinations of these rules can lead to further weaken-
ing of the possible invariants through over-generation
of states.
An example of the second case is:
(:action A
:parameters (?x)
:precondition (p ?x)
:effect (and (not (p ?x)) (q ?x)
(when (q ?x) (and (not (q ?x)) (r ?x)))))
Notice that in order to delete an effect that is added
by the primary effect of an operator, or another con-
ditional effect, the deleted condition must be a precon-
dition of the effect. The overall behaviour of (A a)
applied to a state in which (p a) and (q a) hold is to
generate a state with (q a) and (r a). This is because
the delete effect is enacted before the add effect, so that
the net effect on (q a) is for it to be left unchanged.
Application of the operator to a state in which only (p
a) holds will yield the state in which only (q a) holds.
The rules generated from this operator using the pro-
posal of this section are p1 → q1 and p1 ⇒ q1 → r1.
Testing the enablers in application of these rules allows
a precise generation of states in both cases. However,
it is generally not possible to consider enabling condi-
tions without compromising correct behaviour. Sup-
pose that the additional primary precondition (s ?x)
and primary effect (not (s ?x)) are added to the pre-
vious operator. Then the rules will become: p1, s1 → q1
and p1, s1 ⇒ q1 → r1. It is now impossible to apply the
rules sequentially to the property space state {p1, s1}
if we take enablers into account, because the first rule
will consume the s1property and prevent the second
rule from being applied. Consequently, this approach
cannot restrict rule application using enabling condi-
tions and we will therefore be forced to generate the
unwanted states, weakening the invariants.
Conditional transitions The most radical treat-
ment of conditional effects involves a significant ex-
tension of the TIM machinery in order to extend the
expressive power of the rules in parallel with the ex-
tended expressive power of the operators. This is a
less attractive option, since it requires new algorithmic
treatments, but this price must be offset against the im-
proved analysis and the more powerful invariants that
can be inferred from domain encodings.
The proposal is to extend the expressiveness of the
transition rules to include conditional transitions. The
conditional component of the transition rule is an addi-
tional transition denoted by the keyword if. Satisfac-
tion of the condition depends on the presence in a state
of both enablers and the start conditions.
a1 → x1
if b1 → y1
if c1 → z1
Generating conditional transitions Firstly, we
must generalise the notion of a PRS into a nested
structure to represent operators including when condi-
tions. We include an extra field cnd to record an em-
bedded PRS for the conditional part of the operator.
op_with_non_exclusive_conditions then yields the
following PRS.
pre: a1
add: x1
del: a1
cnd: [ pre: b1   add: y1   del: b1 ]
     [ pre: c1   add: z1   del: c1 ]
Now we use the generalised PRSs to generate con-
ditional transition rules. PRS analysis for secondary
conditions is essentially the same as the analysis for pri-
mary conditions, except that only the adds and deletes
of the secondary rule are considered, but all the precon-
ditions of the containing structures must be included.
In general this construction is straightforward, but
there is an important case that presents a minor com-
plication. If a conditional effect deletes a primary pre-
condition then the primary precondition will be seen
as enabling the outer rule, but it will not appear as a
precondition for the conditional effect. One solution is
to simply handle the proposition as if it were a pre-
condition of the conditional effect, so that the prop-
erty appears as a start condition for the conditional
rule. However, this leaves the enabling condition out-
side, apparently required as an additional property for
the application of the conditional transition rule. This
presents the problem that if enablers are used in deter-
mining applicability of rules it will not be clear whether
the rule demands one or two copies of the property to be
applied. One way to solve this problem is to promote
the precondition, so that deleted conditions in condi-
tional effects are always explicit preconditions of the
conditional effects. To achieve this, a new conditional
effect must be added to the original operator, with the
deleted literals as its preconditions and all of the origi-
nal primary effects as its effects. The deleted literals are
now removed from the primary preconditions and added
explicitly as preconditions to all the other conditional
effects. This transformation yields an operator which
is equivalent in its effects on a state to the original,
although it can, in principle, be applied to a larger col-
lection of states (all states in which the deleted effects
are not true – the effect of application is null). Analysis
of this new operator yields a conditional rule with the
correct structure, distinguishing the case where a prop-
erty is genuinely an enabling condition from the case
where it is actually a copy of the deleted condition in a
secondary effect.
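One possible reading of this promotion, sketched in Python below under hypothetical data structures (preconditions as sets of atoms, an effect as an (adds, deletes) pair, conditional effects as (condition, effect) pairs), moves the deleted primary preconditions into every conditional effect and introduces a new conditional effect carrying the primary effects:

def promote(pre, primary, whens):
    # primary preconditions that some conditional effect deletes
    moved = {p for _, (_, dels) in whens for p in dels if p in pre}
    if not moved:
        return pre, primary, whens
    # every conditional effect now requires the moved literals explicitly
    new_whens = [(cond | moved, eff) for cond, eff in whens]
    # a new conditional effect carries the original primary effects
    new_whens.append((set(moved), primary))
    return pre - moved, (set(), set()), new_whens

# a conditional effect deletes the primary precondition (p ?x)
print(promote({"(p ?x)"}, ({"(q ?x)"}, set()),
              [({"(r ?x)"}, ({"(s ?x)"}, {"(p ?x)"}))]))

On states in which (p ?x) does not hold, the transformed operator has a null effect, matching the equivalence noted above.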
Uniting transition rules In TIM the process of
uniting is that of combining the collections of properties
into a partition such that each transition rule refers, in
its start and end components, only to properties in a
single set within the partition. This ensures that the
construction of extensions of states in property spaces
works within a closed subset of the properties used in
the domain and that the initial state properties are
properly divided between the property spaces to seed
the extension process.
In the case of conditional rules, the properties which
change between the start and end of each conditional
component of each rule must be united into the same
subset of the partition, and properties within different
conditional components of the same rule are also com-
bined.
uniters((Ens ⇒ Start → End) + Subrules) =
  (Start − End) ∪ (End − Start) ∪ ⋃_{Sr ∈ Subrules} uniters(Sr)
This form of uniting ensures that the property spaces
remain as small as possible, which improves the qual-
ity of the invariants that can be generated and also the
efficiency of the analysis. It does, however, impose ad-
ditional difficulties in the use of rules, since the same
rule can now affect the behaviour of objects in multiple
spaces (conditional elements might refer to properties
in entirely different spaces to the primary effects of the
rule). Each rule must be added to every space that it
applies to and considered during the extension of each
of those spaces separately.
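A minimal Python sketch of this computation (assuming, for illustration only, that a rule is a dictionary with start and finish sets and an optional list of subrules, which is not TIM's own representation):

def uniters(rule):
    start, finish = set(rule["start"]), set(rule["finish"])
    # properties exchanged by this rule ...
    props = (start - finish) | (finish - start)
    for sub in rule.get("subrules", []):    # ... plus those of its subrules
        props |= uniters(sub)
    return props

# a rule whose primary part consumes p1 and whose subrules add q1 and r1
rule = {"start": {"p1"}, "finish": set(),
        "subrules": [{"start": set(), "finish": {"q1"}},
                     {"start": set(), "finish": {"r1"}}]}
print(uniters(rule))   # -> {'p1', 'q1', 'r1'}

This matches the behaviour required for the operator B example below, where p1, q1 and r1 must end up in the same property space.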
A further change from the process of setting the ini-
tial collection of property spaces in STRIPS TIM is that
conditional rules can appear to contain attribute rules
when, in fact, they are half of a transition rule that is
completed by a second “attribute” rule in a conditional
effect. For example:
(:action B
:parameters (?x)
:precondition (p ?x)
:effect (and (not (p ?x))
(when (s ?x) (q ?x))
(when (not (s ?x)) (r ?x))))
This operator leads to the rule:
p1 → null
if s1 ⇒ null → q1
if ¬s1 ⇒ null → r1
which might suggest that p1, q1 and r1 should all be con-
sidered to be attributes and, consequently, have no use-
ful invariant behaviours. However, it would be better
to observe that the behaviour of these properties is ac-
tually equivalent to a pair of transitions: s1 ⇒ p1 → q1
and ¬s1 ⇒ p1 → r1 (exploiting negative preconditions,
discussed below). Although it might be possible to con-
vert the rules automatically, it becomes much harder
to do this in the context of multiple rules referring to
other parameters. It is actually easier to manage the
rules during extension. The important thing, during
initial property space construction, is to avoid labelling
properties as attributes on the basis of the structure of
conditional rules. A decision about which properties to
label as attributes must be postponed to the extension
phase.
The fact that the properties in a property space can
be distributed between primary and conditional rules
creates a complication for uniting: it is not enough
to put together properties in the same rule. Proper-
ties must be combined when they appear in “attribute”
rules such as the previous example. It will be noted,
however, that the example relies on the form of the
conditions of the conditional effects and this feature is
further discussed later in the paper.
Extending the state spaces with the conditional
transitions Extension is the stage most impacted by
the introduction of conditional rules. Conditional rules
must be applied to each state in the property space
containing them in order to generate a set of reachable
states. Thus, the key to exploiting these rules is to un-
derstand how to apply the conditional rules to a state.
When a standard transition rule is applied the start
conditions are removed from the state and the end con-
ditions added to the result to yield the (single) new
state. Conditional rules are applied by removing the
primary start properties and then, for each conditional
rule, continuing expansion under the assumption that
the condition applies and under the assumption that it
does not apply. Therefore, there will be 2^n new states
generated for n conditional effects (subject to repeti-
tion of previously visited states). Although this is po-
tentially exponentially expensive, n is typically a very
small value, so that the cost is not a problem. Never-
theless, conditional effects represent a potential source
of considerable cost in the TIM analysis (just as they
can in planning itself). The hope is that we will have
saved significant cost by deferring this combinatorial
aspect to the latest possible time, and some redundant
processing has been avoided.
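A minimal Python sketch of this application step, under an assumed representation (states are bags of properties as sorted tuples, a conditional rule is a tuple of enabler/start/finish Counters plus a list of (start, finish) subrules; whether the primary enablers are tested is an assumption flagged in the comments):

from itertools import combinations
from collections import Counter

def apply_conditional(rule, state):
    enablers, start, finish, subrules = rule
    base = Counter(state)
    # assumption: both the start properties and the enablers of the
    # primary part must be present for the rule to apply at all
    if (start - base) or (enablers - base):
        return set()
    base = base - start + finish
    results = set()
    for k in range(len(subrules) + 1):
        for firing in combinations(subrules, k):
            s, ok = Counter(base), True
            for sub_start, sub_finish in firing:
                if sub_start - s:        # a firing subrule's start must be present
                    ok = False
                    break
                s = s - sub_start + sub_finish
            if ok:
                results.add(tuple(sorted(s.elements())))
    return results

rule = (Counter(), Counter({"a1": 1}), Counter({"x1": 1}),
        [(Counter({"b1": 1}), Counter({"y1": 1})),
         (Counter({"c1": 1}), Counter({"z1": 1}))])
print(apply_conditional(rule, ("a1", "b1", "c1")))
# includes ('x1', 'y1', 'z1'), the state unreachable under the earlier proposals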
There are, in fact, several special cases that can be
used to reduce the number of combinations of condi-
tional rule elements that must be considered. Condi-
tional rules can only be applied if the start properties
are present in the state to which they are applied. Con-
ditions that are restricted to propositions concerning
only the variable that is affected by the transition can
usually be restricted by the enabling or start conditions
of the rule. Further, the observations of the next section
provide for an important collection of situations.
Having allowed possible attribute conditional rules
to be entered into property spaces it is extremely
important that the extension process monitors the
possible existence of increasing attributes in a property
space. This possibility also exists in the original TIM
analysis and is handled by checking to see whether
newly generated states are proper super-states of
previously generated states. In such cases, if there
is a path of transition rules leading from the sub-state
to the super-state, the difference between the states
represents a collection of attribute properties and they
must be stripped from the property space and its rules
before extension can be continued. This process might
iterate several times before the space settles on a fixed
collection of properties. If this collection is actually
empty then the properties will, in fact, all be attributes
due to the effects of the conditional rules in the space.
Hidden exclusivity In some operators, conditional
effects are mutually exclusive because it is not possi-
ble for their preconditions to hold simultaneously. In
such cases the extension process described above will
generate permissive property spaces and weaker invari-
ants. In order to improve the generation process it is
necessary to avoid allowing rules to be applied simulta-
neously when their conditions will prevent it. Further
improvement can be made by observing that in many
cases the conditional effects are created to ensure that
precisely one effect of a collection will be triggered. For
example:
(:action op1
:parameters (?y)
:precondition (a ?y)
:effect (and
(not (a ?y))
(b ?y)))
(:action op2
:parameters (?y)
:precondition (b ?y)
:effect (and
(not (b ?y))
(a ?y)))
(:action op3
:parameters (?x ?y)
:precondition (p ?x ?y)
:effect (and
(when (a ?y) (q ?x))
(when (b ?y) (r ?x))
(not (p ?x ?y)) ))
In this example, properties a1 and b1 may only be
exchanged for each other, as illustrated in Figure 1. If,
in the initial state, objects have at most one of the
two properties, then the two when conditions in op3 are
mutually exclusive.
Figure 1: Property space and transitions for properties
a1 and b1 (op1 exchanges a1 for b1; op2 exchanges b1 for a1).
The generated transitions would be: a1 → b1, b1 → a1,
p1 → null
if null → q1
if null → r1
and p2 → null. Failure to observe that the conditional
effects are governed by preconditions of which precisely
one must be true will mean that no invariants can be
generated using the properties p1, q1 and r1. In order
to discover these we need two pieces of information to
be available during extension: the fact that each condi-
tional effect is governed by a property that, in this case,
applies to another parameter and forms part of another
property space, and the fact that the properties govern-
ing these effects are the two alternative states in that
space. We therefore mark conditional rules with all of
the properties that govern their application. Enabling
conditions that are properties of other parameters are
called aliens. We enclose them in square brackets to dis-
tinguish them from enablers applying to the parameter
governed by the transition rule.
p1 → null
if [a1] ⇒ null → q1
if [b1] ⇒ null → r1
This provides the first piece of information. The second
piece is derived from an analysis of the property space
containing properties a1and b1. It is important that the
analysis makes the latter space available before the use
of the rule that depends upon it. Therefore, we perform
a dependency analysis to order the property spaces for
appropriate expansion order. Where one state space
depends on another, an order can be imposed between
them that would allow information from the first to be
used in the second. Space z depends on space y if tran-
sitions belonging to space z have enabling conditions
which belong to space y.
Circular dependency amongst state spaces must be
handled carefully. One way is to break the cycle arbi-
trarily and then follow the dependencies that remain.
The first space in the chain will then be expanded with
the more conservative assumption that no invariants
affect the conditions governing the application of con-
ditional effects, possibly leading to weakened invariants.
Because these weakened invariants can propagate their
impact up the chain, it would obviously be best to break
the chain in such a way as to minimize the impact that
these weakened invariants might have. An alternative
solution to the problem of cyclic dependency is to carry
out extension on these property spaces in an interleaved
computation, leading to a fixed-point. The approach is
to apply conditional rules relative to the dependencies
on property spaces using the states that have been gen-
erated so far. The extension process must then iterate
around the cycle of interdependent spaces, adding new
states as new enabling states are added to the spaces on
which later spaces depend. This iterative computation
will have to be restarted if any of the properties in a
space is identified as an attribute, as described previ-
ously.
Managing combinations of subrules In this sec-
tion we give more detail about how to use informa-
tion from spaces for which the extension process is al-
ready completed, together with annotation of the sub-
rules with alien enablers, to restrict the combinations
in which subrules of a conditional rule can fire.
For a subrule to fire, we require that for each variable-
space combination occurring as an alien enabler in the
subrule, there is at least one state that satisfies the
enabling property.
The coupling between subrules is captured because
valid combinations of subrule firings are those in which
the enabling conditions can be simultaneously satisfied.
Enabling conditions are satisfied if, for each variable
and each space in which it occurs, there is a non-empty
set of satisfying states.
The approach adopted depends on amending the
TIM algorithm with the following steps:
1. During rule construction, each subrule is anno-
tated with its alien enablers: aliens(subrule) is a set of
triples ⟨property, space, var⟩, where var is the variable
from which the enabler was generated in the planning
operator.
2. A graph of dependencies between spaces is con-
structed from the alien enablers.
3. Spaces are constructed in order, according to topo-
logical sort of the dependency graph. Cycles must be
broken as discussed above.
Now we define the valid combinations of rule firings
by describing a set of variables, and constraints which
must hold between them.
For each variable-space combination in alien enablers,
we have a variable state(variable, space) whose domain
ranges over possible states of variable in space.
state(var, sp) ∈ states(sp)
For each property-variable-space combination in
alien enablers, we have a boolean constraint vari-
able sat(property, variable, space), whose value indi-
cates whether the property is satisfied.
sat(prop, var, sp) ∈ {0, 1}
For each subrule, we have a variable f(subrule),
whose domain is {0, 1}, which records whether or
not the subrule can fire. Note that since the possible
values remaining in the domain are attached to the vari-
able, we can describe never {0}, sometimes {0,1}, and
always {1}.
Now we attach constraints between these variables as
follows:
Between the sat(property, variable, space) variables,
and the state(variable, space), we require that the
property is satisfied iff var is in a compatible state in
space.
∀subrule ∈ subrules. ∀⟨prop, var, sp⟩ ∈ aliens(subrule).
   sat(prop, var, sp) ⇔ has_prop(state(var, sp), prop)
Between the f(subrule) variables and the
sat(property, variable, space) variables of its alien
enablers, we require that subrule fires iff the conjunc-
tion of its alien enablers is satisfied.
f(subrule) ⇔ ⋀_{⟨prop, var, sp⟩ ∈ aliens(subrule)} sat(prop, var, sp)
These constraints are used to determine which com-
binations of subrules may fire together.
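The following Python sketch (an illustration only, with hypothetical structures: alien enablers as (property, variable, space) triples and property-space states as frozensets) enumerates the assignments of states to the alien variable and space pairs and records which combinations of subrules can fire together:

from itertools import product

def firing_combinations(subrules, space_states):
    # subrules: one set of (property, variable, space) triples per subrule
    # space_states: (variable, space) -> list of its possible states
    keys = sorted({(var, sp) for sub in subrules for (_, var, sp) in sub})
    combos = set()
    for assignment in product(*(space_states[k] for k in keys)):
        chosen = dict(zip(keys, assignment))
        fires = tuple(all(prop in chosen[(var, sp)] for (prop, var, sp) in sub)
                      for sub in subrules)
        combos.add(fires)
    return combos

# op3: two subrules enabled by a1 and b1 of ?y, whose space has states {a1}, {b1}
subrules = [{("a1", "?y", "ab-space")}, {("b1", "?y", "ab-space")}]
spaces = {("?y", "ab-space"): [frozenset({"a1"}), frozenset({"b1"})]}
print(firing_combinations(subrules, spaces))
# -> {(True, False), (False, True)}: exactly one of the two subrules fires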
Universally Quantified Effects
Since we are concerned with the transitions made by
individual objects, it is not necessary to expand the op-
erator in advance with every instantiation of the quan-
tified variable, as is done by (Gazen & Knoblock 1997).
(:action one_forall
:parameters (?a - t1)
:precondition (p ?a)
:effect (and (not (p ?a))
(q ?a)
(forall (?b - t2)
(when (r ?a ?b)
(and (not (r ?a ?b)) (s ?a ?b) )))))
The effect inside the quantifier may occur as many
times as there are instantiations for the quantified vari-
able ?b.
For ?b itself, we can generate a transition rule exactly
as if it was an ordinary parameter of the operator. The
number of objects making the transition is not relevant,
as any object belonging to this state space experiences
only a single transition. In the above example, analysis
for the variable ?b gives the transition r2 → s2.
Now consider the transitions for the variable ?a. Out-
side the quantifier, ?a undergoes a single transition
p1 → q1. Inside the quantifier, ?a undergoes a tran-
sition r1 → s1, but the number of times the transition
may occur depends on the number of instantiations the
quantified variable can take. This reasoning applies to
any variable occurring inside the scope of the quantifier,
which occurs in an effect together with the quantified
variable.
We must again resort to a new notation to describe
the resulting transition rules. We use ∗ to indicate that
the transition inside the quantifier may be performed
an unknown number of times.
For the variable ?a, we have:
p1 → q1
(r1 → s1)∗
In the extension process, the interpretation of the
starred rule is that the inner transition may occur any
number of times.
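A minimal Python sketch of this interpretation (assuming, for illustration, that the primary part and the starred inner exchange are each (start, finish) Counter pairs and the object state is a bag of properties):

from collections import Counter

def apply_starred(primary, starred, state):
    p_start, p_finish = primary
    base = Counter(state)
    if p_start - base:                    # primary start must be present
        return set()
    base = base - p_start + p_finish
    s_start, s_finish = starred
    # close under any number of applications of the starred inner exchange
    results, frontier = set(), [base]
    while frontier:
        s = frontier.pop()
        key = tuple(sorted(s.elements()))
        if key in results:
            continue
        results.add(key)
        if not (s_start - s):
            frontier.append(s - s_start + s_finish)
    return results

# ?a in one_forall, starting with p1 and two instances of r1
print(apply_starred((Counter(["p1"]), Counter(["q1"])),
                    (Counter(["r1"]), Counter(["s1"])),
                    ("p1", "r1", "r1")))
# -> states with q1 and 0, 1 or 2 of the r1 properties exchanged for s1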
Observe that in this example, the transition inside
the quantifier is an exchange of properties. It may also
occur that the transition inside the quantifier involves
properties simply being gained or lost. Such properties
are considered to be attributes in the TIM system, and
are processed in a separate way. The presence of such
rules otherwise leads to non-termination of the exten-
sion process, as attributes may be added without limit.
Attribute rules are less useful for discovering invariants,
and it is unfortunate if properties are considered as at-
tributes unnecessarily. This is harmful to the TIM anal-
ysis because any property which becomes an attribute
also makes anything for which it can be exchanged into
an attribute.
It is our contention that properties will not be made
into attributes unnecessarily as a result of analysing
the quantified effect. Where properties are exchanged,
the whole exchange will normally take place inside the
quantifier, as in the example above.
An example where a quantified effect correctly gives
rise to attribute rules can be seen in the briefcase move
operator given above. There the at2 properties are at-
tributes — the portables at a location may be gained
and lost without exchange.
An awkward example can be found in the
Schedule domain, in which predicates of the form
(painted ?object ?paint) are typical. All operators
in the domain which mention painted include a dele-
tion which is universally quantified over possible paint
colours, as in the example operator shown below. Some
of the operators also add a single painted effect. Hence
all operators which touch an object's painted1 property
result in a state with either 0 or 1 instances of the prop-
erty for that object. If this holds also in the initial state,
it is an invariant.
(:action do-immersion-paint
:parameters (?x ?newpaint)
:precondition (and
(part ?x)
(has-paint immersion-painter ?newpaint)
(not (busy immersion-painter))
(not (scheduled ?x)))
:effect (and
(busy immersion-painter)
(scheduled ?x)
(painted ?x ?newpaint)
(when (not (objscheduled))
(objscheduled))
(forall (?oldpaint)
(when (painted ?x ?oldpaint)
(not (painted ?x ?oldpaint))))))
It is problematic that the quantified effect makes it
appear that painted1 is a decreasing attribute. In ear-
lier versions of TIM, the appearance of decreasing at-
tribute transitions always led to the creation of a sepa-
rate attribute space. However, if the attribute may only
decrease, it can lead to only a finite number of states, so
it is safe to have rules of this kind in a property space.
It is important to take this approach, as invariants will
otherwise be lost.
Negative preconditions
Our treatment of negative preconditions is currently the
most restrictive described here. We use a similar tech-
nique to (Gazen & Knoblock 1997): for each predicate
that may appear as a negative precondition, we create a
new predicate to represent the negative version of that
precondition. This predicate must exactly complement
the positive use of the predicate and it replaces, in pre-
conditions, the use of the negative literal.
For those predicates which occur anywhere as a neg-
ative precondition, we must do the following to the ef-
fects of every operator:
•Wherever the positive version of the predicate ap-
pears as a delete effect, the negative version must
appear as an add-effect.
•Wherever the positive version of the predicate ap-
pears as an add effect, the negative version must ap-
pear as a deleted effect.
We complete the initial state of the problem descrip-
tion as follows: For each predicate and each object that
may instantiate the predicate, if there is no positively-
occurring fact, we add the negative version of the fact.
In the case of negatively-occurring predicates with
multiple arguments, the completion of the initial state is
very expensive, and would, in any case, lead to a much
weaker domain analysis. For our processing we impose
the restriction that only predicates of a single argument
are handled. This also allows the transformation to be
performed not on the operator itself but at the level of
PRSs.
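A minimal Python sketch of this compilation for unary predicates, under a hypothetical representation (atoms as (predicate, argument) pairs, operators as dicts of pre/add/del sets; replacing negative literals in preconditions by the complementary predicate is omitted for brevity):

def compile_negations(neg_preds, operators, objects, init):
    comp = lambda pred: "not-" + pred
    for op in operators:
        # keep the complementary predicate in step with the positive one
        for pred, arg in list(op["del"]):
            if pred in neg_preds:
                op["add"].add((comp(pred), arg))
        for pred, arg in list(op["add"]):
            if pred in neg_preds:
                op["del"].add((comp(pred), arg))
    # complete the initial state with the negative versions of missing facts
    for pred in neg_preds:
        for obj in objects:
            if (pred, obj) not in init:
                init.add((comp(pred), obj))
    return operators, init

ops = [{"pre": {("part", "?x")}, "add": {("scheduled", "?x")}, "del": set()}]
ops, init = compile_negations({"scheduled"}, ops, {"part1"}, set())
print(ops[0]["del"], init)
# -> {('not-scheduled', '?x')} {('not-scheduled', 'part1')}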
We believe that predicates with more than a single
argument could be handled efficiently using a represen-
tation in which properties are counted, but this remains
an area of further investigation.
Results
A prototype system has been implemented which suc-
cessfully produces the expected invariants for all the
cases considered in this paper, except those relying on
a full treatment of universal quantifiers. Work on the
handling of universal quantifiers is currently in progress.
Additional computational overheads for handling the
conditional effects are negligible.
In the example of hidden exclusivity between condi-
tional transitions, the system successfully detects that
exactly one of the subrules may fire, and thus extension
does not over-generate states or unnecessarily weaken
the analysis by creating an attribute space. The result-
ing space for the properties p1, q1 and r1 is shown in
Figure 2.

Figure 2: Property space and transitions for properties
p1, q1 and r1 (op3 exchanges p1 for either q1 or r1).
Directly from the state space, we get that states
in this space allow exactly one of the properties
{p1, q1, r1}, and from this the following invariants are
generated:
FORALL x:T1. (Exists y1:T0. p(x,y1) OR r(x) OR q(x))
FORALL x:T1. NOT (Exists y1:T0. p(x,y1) AND r(x))
FORALL x:T1. NOT (Exists y1:T0. p(x,y1) AND q(x))
FORALL x:T1. NOT (r(x) AND q(x))
Related work
Both the system of Rintanen (Rintanen 2000) and
Gerevini and Schubert’s DISCOPLAN (Gerevini &
Schubert 1998) rely on generating an initial set of can-
didate domain invariants. These are then tested against
the operators to see if they hold after the application of
the operator. Candidate invariants which are not pre-
served are either discarded or strengthened by adding
extra conditions and tested further. In both systems,
discovered invariants can be fed back, to help discover
further conditions.
The current version of DISCOPLAN (Gerevini &
Schubert 2000) has been extended to handle conditional
effects, types and negative preconditions, but does not
handle universal quantification.
An interesting question is whether DISCOPLAN can
detect and exploit the occurrence of mutually exclu-
sive secondary conditions. Our tests with the current
version of DISCOPLAN indicate that it cannot. The
account in (Gerevini & Schubert 2000) discusses the
feeding back of discovered invariants into the process,
by using this information to expand operator defini-
tions. It appears that currently, only the implicative
constraints are fed back in this way. To correctly re-
solve this problem would require XOR constraints to be
fed back into the processing of secondary conditions.
Conclusion
In this paper we have explored the extension of TIM to
handle a subset of ADL features. The features we have
not considered are those that allow fuller expressiveness
in the expression of preconditions: quantified variables
and arbitrary logical connectives. There are significant
difficulties in attempting to handle these features, since
the opportunity to identify the specific properties asso-
ciated with particular objects is obscured by the pos-
sibility for disjunctive preconditions combining proper-
ties of different objects, universally quantified variables
that allow reference to arbitrary objects, and nested
expressions that can involve exponential expansion if
conversion to disjunctive normal form is used. We have
also not considered a full treatment of equality and in-
equality propositions, nor of arbitrary negated literals.
The features that have been considered, and for
which there are extensions that allow TIM to extract in-
variants, include conditional effects, quantified effects,
types and a restricted form of negative preconditions.
These features form the core of those used in existing
ADL domain encodings, with the exception of the ADL
Miconics-10 domain used in the 2nd International Plan-
ning Competition in 2000. The most difficult feature
to handle is the use of conditional effects and we have
shown that there are a variety of possible approaches,
each with advantages and disadvantages. The most
powerful approach is the extension of the underlying
TIM machinery, rather than an attempt to preprocess
conditional effects out of the operators in order to reuse
the STRIPS TIM machinery. This extended machinery
complicates the entire sequence of analysis phases con-
ducted by TIM and we have described the effects that
are implied for each stage in turn.
The next stages of this work include completion of
a full implementation of all of the features described
in this paper, a further exploration of the treatment of
negative preconditions and experimentation with the
system on existing ADL domains.
Acknowledgements
The work was funded by EPSRC grant no. R090459.
References
Bacchus, F., and Kabanza, F. 2000. Using temporal
logics to express search control knowledge for plan-
ning. Artificial Intelligence 116.
Doherty, P., and Kvarnstrom, J. 1999. TALplanner:
An empirical investigation of a temporal logic-based
forward chaining planner. In Proceedings of the 6th
Int’l Workshop on the Temporal Representation and
Reasoning, Orlando, Fl. (TIME’99).
Fox, M., and Long, D. 1998. The automatic infer-
ence of state invariants in TIM. Journal of Artificial
Intelligence Research 9:367–421.
Fox, M., and Long, D. 2000. Utilizing automat-
ically inferred invariants in graph construction and
search. In International AI Planning Systems confer-
ence, AIPS 2000, Breckenridge, Colorado, USA.
Gazen, B. C., and Knoblock, C. A. 1997. Combin-
ing the expressivity of UCPOP with the efficiency of
graphplan. In ECP, 221–233.
Gerevini, A., and Schubert, L. K. 1998. Inferring
state constraints for domain-independent planning. In
AAAI/IAAI, 905–912.
Gerevini, A., and Schubert, L. 2000. Extending the
types of state constraints discovered by DISCOPLAN.
In Proceedings of the Workshop at AIPS on Analyzing
and Exploiting Domain Knowledge for Efficient Plan-
ning, 2000.
Kautz, H., and Selman, B. 1998. The role of domain-
specific knowledge in the planning-as-satisfiability
framework. In Proceedings of the 4th International
Conference on AI Planning Systems.
Lifschitz, V. 1986. On the semantics of STRIPS. In
Proceedings of 1986 Workshop: Reasoning about Ac-
tions and Plans.
Long, D., and Fox, M. 2000. Recognizing and ex-
ploiting generic types in planning domains. In Inter-
national AI Planning Systems conference, AIPS 2000,
Breckenridge, Colorado, USA.
Pednault, E. 1989. ADL: Exploring the middle ground
between STRIPS and the situation calculus. In Pro-
ceedings of the 1st International Conference on Prin-
ciples of Knowledge Representation and Reasoning
(KR'89).
Porteous, J.; Sebastia, L.; and Hoffmann, J. 2001. On
the extraction, ordering, and usage of landmarks in
planning. In Proceedings of European Conference on
Planning.
Rintanen, J. 2000. An iterative algorithm for synthe-
sizing invariants. In AAAI/IAAI, 806–811.
van Beek, P., and Chen, X. 1999. Cplan: A constraint
programming approach to planning. In Proceedings
of the 16th National Conference on Artificial Intelli-
gence, Orlando, Florida.