Making AC-3 an Optimal Algorithm
Yuanlin Zhang and Roland H.C. Yap
School of Computing, National University of Singapore
Lower Kent Ridge Road, 119260, Singapore
{zhangyl, ryap}@comp.nus.edu.sg
Abstract
The AC-3 algorithm is a basic and widely used arc
consistency enforcing algorithm in Constraint Sat-
isfaction Problems (CSP). Its strength lies in that
it is simple, empirically efficient and extensible.
However, its worst case time complexity was not
considered optimal since the first complexity result
for AC-3 [Mackworth and Freuder, 1985], which gave the
bound O(ed^3), where e is the number of constraints
and d the size of the largest domain. In this paper,
we show, surprisingly, that AC-3 achieves the opti-
mal worst case time complexity of O(ed^2). The
result is applied to obtain a path consistency algo-
rithm which has the same time and space complex-
ity as the best known theoretical results. Our exper-
imental results show that the new approach to AC-3
is comparable to the traditional AC-3 implementa-
tion for simpler problems where AC-3 is more effi-
cient than other algorithms, and significantly faster
on hard instances.
1 Introduction
Arc consistency is a basic technique for solving Constraint
Satisfaction Problems (CSP) and variations of arc consis-
tency are used in many AI and constraint applications. There
have been many algorithms developed for arc consistency
such as AC-3 [Mackworth, 1977], AC-4 [Mohr and Hender-
son, 1986], AC-6 [Bessiere, 1994] and AC-7 [Bessiere et al.,
1999]. The AC-3 algorithm was proposed in 1977 [Mack-
worth, 1977]. The first worst case analysis of AC-3 [Mack-
worth and Freuder, 1985] gives a complexity of O(ed3),
where e is the number of constraints and d the size of largest
domain. This result is deeply rooted in the CSP literature (e.g.
[Wallace, 1993; Bessiere et al., 1999]) and thus AC-3 is typi-
cally considered to be non-optimal. Other algorithms such as
AC-4, AC-6, AC-7 are considered theoretically optimal, with
time complexity O(ed2). As far as we are aware, there has
not been any effort to improve the theoretical bound of AC-3
to be optimal. Here, we re-examine AC-3 for a number of
reasons. Firstly, AC-3 is one of the simplest AC algorithms
and is known to be practically efficient [Wallace, 1993]. The
simplicity of arc revision in AC-3 makes it convenient for
implementation and amenable to various extensions for many
constraint systems. Thus, while AC-3 is considered to be
sub-optimal, it is often the algorithm of choice and can out-
perform other theoretically optimal algorithms.
In this paper, we show that AC-3 achieves worst case op-
timal time complexity of O(ed2). This result is surprising
since AC-3 being a coarse grained “arc revision” algorithm
[Mackworth, 1977], is considered to be non-optimal. The
known results for optimal algorithms are all on fine grained
“value revision” algorithms. Preliminary experiments show
that the new AC-3 is comparable to the traditional implemen-
tations on easy CSP instances where AC-3 is known to be
substantially better than the optimal fine grained algorithms.
In the hard problem instances such as those from the phase
transition, the new AC-3 is significantly better and is compa-
rable to the best known algorithms such as AC-6. We also
show that the results for AC-3 can be applied immediately to
obtain a path consistency algorithm which has the same time
and space complexity as the best known theoretical results [1].
2 Background
In this section we give some background material and no-
tation used herein. The definitions for general CSP follow
[Montanari, 1974; Mackworth, 1977].
Definition 1 A Constraint Satisfaction Problem (N, D, C)
consists of a finite set of variables N = {1, ..., n}, a set
of domains D = {D1, ..., Dn}, where Di is the domain of
variable i, and a set of constraints C = {cij | i, j ∈ N},
where each constraint cij is a binary relation between
variables i and j. For the problem of interest here, we
require that ∀x, y, x ∈ Di, y ∈ Dj, (x, y) ∈ cij if and only
if (y, x) ∈ cji.
For simplicity, in the above definition we consider only binary
constraints, omitting the unary constraint on any variable
[Mackworth, 1977]. Without loss of generality we assume
there is only one constraint between each pair of variables.
Definition 2 The constraint graph of a CSP (N, D, C) is the
graph G = (V, E) where V = N and E = {(i,j) | ∃ cij ∈
C or ∃ cji ∈ C}.
The arcs in CSP refer to the directed edges in G. Throughout
this paper, n denotes the number of variables, d the size of
the largest domain, and e the number of constraints in C.
[1] A related paper by Bessiere and Regin appears in these
proceedings.
Definition 3 Given a CSP (N, D, C), an arc (i,j) of its con-
straint graph is arc consistent if and only if ∀x ∈ Di, there
exists y ∈ Dj such that cij(x,y) holds. A CSP (N, D, C) is
arc consistent if and only if each arc in its constraint graph is
arc consistent.
The AC-3 algorithm for enforcing arc consistency on a
CSP is given in figure 2. The presentation follows [Mack-
worth, 1977; Mackworth and Freuder, 1985] with a slight
change in notation and node consistency removed.
procedure REVISE((i,j))
begin
  DELETE ← false
  for each x ∈ Di do
1.    if there is no y ∈ Dj such that cij(x,y) then
        delete x from Di;
        DELETE ← true
      endif
  return DELETE
end

Figure 1: procedure REVISE for AC-3
algorithm AC-3
begin
1.  Q ← {(i,j) | cij ∈ C or cji ∈ C, i ≠ j}
    while Q not empty do
      select and delete any arc (k,m) from Q;
2.    if REVISE((k,m)) then
3.      Q ← Q ∪ {(i,k) | (i,k) ∈ E(G), i ≠ k, i ≠ m}
    endwhile
end

Figure 2: The AC-3 algorithm
The task of REVISE((i,j)) in Fig 1 is to remove those in-
valid values not related to any other value with respect to arc
(i,j). We will show in section 3 that different implementa-
tions of line 1 lead to different worst case complexities. As
such, we argue that it is more useful to think of AC-3 as a
framework rather than a specific algorithm. In AC-3, a CSP
is modeled by its constraint graph G, and what AC-3 does is
to revise all arcs (∗,i) = {(k,i)|(k,i) ∈ E(G)} (line 3 in
Fig 2) except some special arc if the domain of variable i is
modified by REVISE((i,j)). A queue Q is used to hold all
arcs that need to be revised. The traditional understanding of
AC-3 is given by the following theorem, whose proof from
[Mackworth and Freuder, 1985] is modified in order to facil-
itate the presentation in Section 3.
Theorem 1 [Mackworth and Freuder, 1985] Given a CSP
(N, D, C), the time complexity of AC-3 is O(ed3).
Proof Consider the number of times each arc (i,j) is revised.
(i,j) is revised if and only if it enters Q. The observation
is that arc (i,j) enters Q if and only if some value of j is
deleted (lines 2-3 in Fig 2). So, arc (i,j) enters Q at most
d times and thus is revised at most d times. Given that the
number of arcs is 2e, REVISE((i,j)) is executed O(ed) times.
The complexity of each execution of REVISE((i,j)) in Fig 1
is at most O(d^2), which gives the O(ed^3) bound.
The reader is referred to [Mackworth, 1977; Mackworth
and Freuder, 1985] for more details and motivations concern-
ing arc consistency.
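To make the pseudocode of Figures 1 and 2 concrete, here is a minimal runnable sketch in Python. The representation (domains as sets of values, each constraint stored as a set of allowed pairs, with both arc directions present) is our own assumption, not the paper's:

```python
from collections import deque

def revise(domains, constraints, i, j):
    """REVISE((i,j)) of Fig 1: delete every x in Di with no support in Dj.
    Support is searched from scratch on each call -- the AC-3.0 behaviour."""
    cij = constraints[(i, j)]
    removed = False
    for x in list(domains[i]):
        if not any((x, y) in cij for y in domains[j]):
            domains[i].remove(x)
            removed = True
    return removed

def ac3(domains, constraints):
    """AC-3 of Fig 2 over `constraints[(i,j)]` = set of allowed pairs.
    Returns False as soon as some domain becomes empty."""
    q = deque(constraints)                      # line 1: all arcs (i,j)
    while q:
        k, m = q.popleft()
        if revise(domains, constraints, k, m):  # line 2
            if not domains[k]:
                return False
            # line 3: re-enqueue arcs (i,k), skipping i = m
            q.extend(arc for arc in constraints
                     if arc[1] == k and arc[0] != m)
    return True
```

On the toy constraint x < y over {1,2,3} x {1,2,3}, this prunes D1 to {1,2} and D2 to {2,3}.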
3 A New View of AC-3
The traditional view of AC-3 with the worst case time com-
plexity of O(ed^3) (described by Theorem 1) is based on a
naive implementation of line 1 in Fig 1 in which y is always
searched for from scratch. Hereafter, for ease of presentation,
we call the classical implementation AC-3.0. The new
approach to AC-3 in this paper, called AC-3.1, is based on
the observation that y in line 1 of Fig 1 need not be searched
for from scratch even though the same arc (i,j) may enter Q
many times. The search is simply resumed from the point
where it stopped in the previous revision of (i,j). This idea
is implemented by procedure EXISTy((i,x),j) in Fig 3.
Assume without loss of generality that each domain Di is
associated with a total ordering. ResumePoint((i,x),j) re-
members the first value y ∈ Dj such that cij(x,y) holds in
the previous revision of (i,j). The succ(y, D0j) function,
where D0j denotes the domain of j before arc consistency
enforcing, returns the successor of y in the ordering of D0j,
or NIL if no such element exists. NIL is a value not belong-
ing to any domain and precedes all values in any domain.

procedure EXISTy((i,x),j)
begin
  y ← ResumePoint((i,x),j);
1:  if y ∈ Dj then % y is still in the domain
      return true;
    else
2:    while ((y ← succ(y, D0j)) and (y ≠ NIL))
        if y ∈ Dj and cij(x,y) then
          ResumePoint((i,x),j) ← y;
          return true
        endif;
      return false
    endif
end

Figure 3: Procedure for searching y in REVISE(i,j)
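A sketch of the same idea in Python (names such as `make_exist_y` and the position-based encoding of ResumePoint are ours; the paper's NIL is played by position -1):

```python
def make_exist_y(d0):
    """EXISTy of Fig 3 over the initial domains `d0` (the D0j above),
    given as lists fixing the total ordering of each domain."""
    resume = {}  # ResumePoint((i,x),j), stored as a position in d0[j]

    def exist_y(domains, constraints, i, x, j):
        order = d0[j]
        start = resume.get((i, x, j), -1)      # -1 plays the role of NIL
        if start >= 0 and order[start] in domains[j]:
            return True                        # line 1: old support survives
        for pos in range(start + 1, len(order)):  # line 2: resume the scan
            y = order[pos]
            if y in domains[j] and (x, y) in constraints[(i, j)]:
                resume[(i, x, j)] = pos
                return True
        return False

    return exist_y
```

Because the scan over D0j never restarts, each element of Dj is skipped at most once per value x and arc (i,j), which is exactly the amortization used in Theorem 2 below.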
Theorem 2 The worst case time complexity of AC-3 can be
achieved in O(ed^2).
Proof Here it is helpful to regard the execution of AC-3.1
on a CSP instance as a sequence of calls to EXISTy((i,x),j).
Consider the time spent on x ∈ Di with respect to (i,j).
As in Theorem 1, an arc (i,j) enters Q at most d times. So,
with respect to (i,j), any value x ∈ Di will be passed to
EXISTy((i,x),j) at most d times. Let the complexity of
each execution of EXISTy((i,x),j) be tl (1 ≤ l ≤ d). tl
can be considered as 1 if y ∈ Dj (see line 1 in Fig 3) and
otherwise it is sl, which is simply the number of elements in
Dj skipped before the next y is found (the while loop in line
2). Furthermore, the total time spent on x ∈ Di with respect
to (i,j) is

  Σ_{l=1}^{d} tl ≤ Σ_{l=1}^{d} 1 + Σ_{l=1}^{d} sl,  where sl = 0 if tl = 1.

Observe that in EXISTy((i,x),j) the while loop (line 2) will
skip an element in Dj at most once with respect to x ∈ Di.
Therefore, Σ_{l=1}^{d} sl ≤ d. This gives Σ_{l=1}^{d} tl ≤ 2d.
For each arc (i,j), we have to check at most d values in Di
and thus at most O(d^2) time will be spent on checking arc
(i,j). Thus, the complexity of the new implementation of
AC-3 is O(ed^2) because the number of arcs in the constraint
graph of the CSP is 2e.
The space complexity of AC-3.1 is not as good as the tra-
ditional implementation of AC-3. AC-3.1 needs additional
space to remember the resumption point of any value with re-
spect to any related constraint. It can be shown that the extra
space required is O(ed), which is the same as AC-6.
The same idea behind AC-3.1 applies to path consistency
enforcing algorithms. If one pair (x,y) ∈ ckj is removed, we
need to recheck all pairs (x,∗) ∈ cij with respect to ckj ◦ cik
(the composition of cik and ckj), and (∗,y) ∈ clk with re-
spect to cjk ◦ clj. The resumption point z ∈ Dk is remem-
bered for any pair (x,y) of any constraint cij with respect to
any intermediate variable k such that cik(x,z), ckj(z,y) both
hold. ResumePoint((i,x),(j,y),k) is employed to achieve
the above idea in the algorithm in Fig 4, which is partially
motivated by the algorithm in [Chmeiss and Jegou, 1996].
algorithm PC
begin
  INITIALIZE(Q);
  while Q not empty do
    Select and delete any ((i,x),j) from Q;
    REVISE PC((i,x),j,Q)
  endwhile
end

procedure INITIALIZE(Q)
begin
  for any i,j,k ∈ N do
    for any x ∈ Di, y ∈ Dj such that cij(x,y) do
      if there is no z ∈ Dk such that cik(x,z) ∧ ckj(z,y) then
        cij(x,y) ← false;
        cji(y,x) ← false;
        Q ← Q ∪ {((i,x),j)} ∪ {((j,y),i)}
      else ResumePoint((i,x),(j,y),k) ← z
end

Figure 4: Algorithm of Path Consistency Enforcing
By using a similar analysis to the proof of theorem 2, we
have the following result.
Theorem 3 The time complexity of the algorithm PC is
O(n3d3) with space complexity O(n3d2).
The time complexity and space complexity of the PC algo-
rithm here are the same as the best known theoretical results
[Singh, 1996].
4 Preliminary experimental results
In this paper, we present some preliminary experimental re-
sults on the efficiency of AC-3. While arc consistency can be
procedure REVISE PC((i,x),k,Q)
begin
  for any j ∈ N, k ≠ i, k ≠ j do
    for any y ∈ Dj such that cij(x,y) do
      z ← ResumePoint((i,x),(j,y),k);
      while (z ≠ NIL) and not (cik(x,z) ∧ ckj(z,y)) do
        z ← succ(z, D0k);
      if not (cik(x,z) ∧ ckj(z,y)) then
        Q ← Q ∪ {((i,x),j)} ∪ {((j,y),i)}
      else ResumePoint((i,x),(j,y),k) ← z
  endfor
end

Figure 5: Revision procedure for PC algorithm
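The inner support search of REVISE PC can be sketched the same way (a simplified fragment under our own encoding: witnesses are looked up by position in the initial ordering of Dk, and validity is tracked via domain membership rather than by setting constraint pairs to false as in Fig 4):

```python
def find_z(order_k, domain_k, cik, ckj, x, y, resume, key):
    """Look for a witness z in Dk with cik(x,z) and ckj(z,y), resuming
    from the remembered position. `key` stands for ((i,x),(j,y),k)."""
    start = resume.get(key, -1)                # -1 plays the role of NIL
    if start >= 0:
        z = order_k[start]
        if z in domain_k and (x, z) in cik and (z, y) in ckj:
            return True                        # old witness still valid
    for pos in range(start + 1, len(order_k)):
        z = order_k[pos]
        if z in domain_k and (x, z) in cik and (z, y) in ckj:
            resume[key] = pos                  # new ResumePoint
            return True
    return False                               # (x,y) must be enqueued
```

As in AC-3.1, each element of Dk is skipped at most once per triple, which yields the O(n^3 d^3) bound of Theorem 3.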
applied in the context of search (such as [Bessiere and Re-
gin, 1996]), we focus on the performance statistics of the arc
consistency algorithms alone.
The experiments are designed to compare the empirical
performance of the new AC-3.1 algorithm with both the clas-
sical AC-3.0 algorithm and a state-of-the-art algorithm on a
range of CSP instances with different properties.
There have been many experimental studies on the perfor-
mance of general arc consistency algorithms [Wallace, 1993;
Bessiere, 1994; Bessiere et al., 1999]. Here, we adopt the
choice of problems used in [Bessiere et al., 1999], namely
some random CSPs, Radio Link Frequency Assignment prob-
lems (RLFAPs) and the Zebra problem. The Zebra problem is
discarded as it is too small for benchmarking. Given the ex-
perimental results of [Bessiere et al., 1999], AC-6 is chosen
as a representative of a state-of-the-art algorithm because of
its good timing performance over the problems of concern. In
addition, an artificial problem DOMINO is designed to study
the worst case performance of AC-3.
Randomly generated problems: As in [Frost et al., 1996],
a random CSP instance is characterized by n, d, e and the
tightness of each constraint. The tightness of a constraint
cij is defined to be |Di × Dj| − |cij|, the number of pairs
NOT permitted by cij. A randomly generated CSP in our
experiments is represented by a tuple (n, d, e, tightness). We
use the first 50 instances of each of the following random
problems generated using the initial seed 1964 (as in
[Bessiere et al., 1999]): (i) P1: under constrained CSPs
(150, 50, 500, 1250) where all generated instances are
already arc consistent; (ii) P2: over constrained CSPs
(150, 50, 500, 2350) where all generated instances are
inconsistent in the sense that some domain becomes empty
in the process of arc consistency enforcing; and (iii) problems
in the phase transition [Gent et al., 1997] P3: (150, 50, 500,
2296) and P4: (50, 50, 1225, 2188). The P3 and P4 problems
are further separated into the arc consistent instances, labeled
as ac, which can be made arc consistent at the end of arc
consistency enforcing; and inconsistent instances labeled as
inc. More details on the choices for P1 to P4 can be found in
[Bessiere et al., 1999].
RLFAP: The RLFAP [Cabon et al., 1999] is to assign
frequencies to communication links to avoid interference.
We use the CELAR instances of RLFAP which are real-life
problems available at
ftp://ftp.cs.unh.edu/pub/csp/archive/code/benchmarks.
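The tightness definition used for the random problems above can be checked in a couple of lines (the toy constraint here is our own example, not one of the benchmark instances):

```python
def tightness(di, dj, cij):
    """|Di x Dj| - |cij|: the number of pairs NOT permitted by cij."""
    return len(di) * len(dj) - len(cij)

# hypothetical toy constraint x < y over {1,2,3} x {1,2,3}
di = dj = {1, 2, 3}
cij = {(x, y) for x in di for y in dj if x < y}  # 3 allowed pairs
print(tightness(di, dj, cij))                    # 9 - 3 = 6
```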
DOMINO: Informally, the DOMINO problem is an undi-
rected constraint graph with a cycle and a trigger constraint.
The domains are Di = {1, 2, ..., d}. The constraints are
C = {ci(i+1) | i < n} ∪ {c1n}, where c1n = {(d,d)} ∪
{(x, x+1) | x < d} is called the trigger constraint and
the other constraints in C are identity relations. A DOMINO
problem instance is characterized by two parameters n and
d. The trigger constraint will make one value invalid during
arc consistency enforcing, and that value will trigger the
domino effect on the values of all domains until each domain
has only one value d left. So, each revision of an arc in AC-3
algorithms can only remove one value, while AC-6 only does
the necessary work. This problem is used to illustrate the
differences between AC-3 like algorithms and AC-6. The
results explain why arc revision oriented algorithms may not
be as bad in the worst case as one might imagine.
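A DOMINO(n, d) instance is easy to generate; the encoding below (allowed-pair sets, with both arc directions stored) is our own, not the paper's:

```python
def domino(n, d):
    """Build the DOMINO problem: a chain 1-2-...-n of identity
    constraints, closed into a cycle by the trigger constraint c1n."""
    domains = {i: set(range(1, d + 1)) for i in range(1, n + 1)}
    identity = {(v, v) for v in range(1, d + 1)}
    constraints = {}
    for i in range(1, n):                       # c_i(i+1): identity
        constraints[(i, i + 1)] = set(identity)
        constraints[(i + 1, i)] = set(identity)
    trigger = {(d, d)} | {(v, v + 1) for v in range(1, d)}
    constraints[(1, n)] = trigger               # the trigger constraint
    constraints[(n, 1)] = {(y, x) for (x, y) in trigger}
    return domains, constraints
```

Enforcing arc consistency on such an instance removes values one at a time around the cycle until only d is left in every domain.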
                     AC-3.0       AC-3.1        AC-6
P1       #ccks      100,010      100,010     100,010
         time(50)      0.65         0.65        1.13
P2       #ccks      494,079      475,443     473,694
         time(50)      1.11         1.12        1.37
P3(ac)   #ccks    2,272,234      787,151     635,671
         time(25)      2.73         1.14        1.18
P3(inc)  #ccks    3,428,680      999,708     744,929
         time(25)      4.31         1.67        1.69
P4(ac)   #ccks    3,427,438    1,327,849   1,022,399
         time(21)      3.75         1.70        1.86
P4(inc)  #ccks    5,970,391    1,842,210   1,236,585
         time(29)      8.99         3.63        3.54

Table 1: Randomly generated problems
RLFAP               AC-3.0       AC-3.1        AC-6
#3       #ccks      615,371      615,371     615,371
         time(20)      1.47         1.70        2.46
#5       #ccks    1,762,565    1,519,017   1,248,801
         time(20)      4.27         3.40        5.61
#8       #ccks    3,575,903    2,920,174   2,685,128
         time(20)      8.11         6.42        8.67
#11      #ccks      971,893      971,893     971,893
         time(20)      2.26         2.55        3.44

Table 2: CELAR RLFAPs
Some details of our implementation of AC-3.1 and AC-3.0
are as follows. We implement domains and related operations
by employing a doubly-linked list. The Q in AC-3 is imple-
mented as a queue of nodes whose incident arcs will be
revised [Chmeiss and Jegou, 1996]. A new node is put at the
end of the queue. Constraints in the queue are revised in
a FIFO order. The code is written in C++ and compiled with
g++. The experiments are run on a Pentium III 600 processor
with Linux.
d                    AC-3.0       AC-3.1        AC-6
100      #ccks   17,412,550    1,242,550     747,551
         time(10)      5.94         0.54        0.37
200      #ccks  136,325,150    4,985,150   2,995,151
         time(10)     43.65         2.21        1.17
300      #ccks  456,737,750   11,227,750   6,742,751
         time(10)    142.38         5.52        2.69

Table 3: DOMINO problems
For AC-6, we note that in our experiments, using a sin-
gle currently supported list per value is faster than using the
multiple lists with respect to related constraints proposed in
[Bessiere et al., 1999]. This may be one reason why AC-7 is
slower than AC-6 in [Bessiere et al., 1999]. Our implementa-
tion of AC-6 adopts a single currently supported list.
The performance of arc consistency algorithms here is
measured along two dimensions: running time and number
of constraint checks (#ccks). A raw constraint check tests
whether a pair (x,y), where x ∈ Di and y ∈ Dj, satisfies
constraint cij. In this experiment we assume constraint
checks are cheap, and thus both raw constraint checks and
the additional checks (e.g. line 1 in Fig 3) in AC-3.1 and
AC-6 are counted. In the tabulated experimental results,
#ccks represents the average number of checks on the tested
instances, and time(x) the time in seconds on x instances.
The results for randomly generated problems are listed in
Table 1. For the under constrained problems P1, AC-3.1 and
AC-3.0 have similar running times. No particular slowdown
for AC-3.1 is observed. In the over constrained problems P2,
the performance of AC-3.1 is close to AC-3.0 but some con-
straint checks are saved. In the hard phase transition problems
P3 and P4, AC-3.1 shows significant improvement in terms
of both the number of constraint checks and the running time,
and is better than or close to AC-6 in timing.
The results for CELAR RLFAP are given in Table 2. In
simple problems, RLFAP#3 and RLFAP#11, which are al-
ready arc consistent before the execution of any AC algo-
rithm, no significant slowdown of AC-3.1 over AC-3.0 is ob-
served. For RLFAP#5 and RLFAP#8, AC-3.1 is faster than
both AC-3.0 and AC-6 in terms of timing.
The reason why AC-6 takes more time while making fewer
checks can be explained as follows. The main contribution
to the slowdown of AC-6 is the maintenance of the currently
supported list of each value of all domains. In order to achieve
a space complexity of O(ed), when a value in the currently
supported list is checked, the space occupied in the list by
that value has to be released. Our experiments show that the
overhead of maintaining the list is not compensated for by the
savings from fewer checks under the assumption that constraint
checking is cheap.
The DOMINO problem is designed to show the gap be-
tween AC-3 implementations and AC-6. Results in Table 3
show that AC-3.1 is about half the speed of AC-6. This can
be explained by a variation of the proof in Section 3: in AC-
3.1 the time spent on justifying the validity of a value with
respect to a constraint is at most 2d, while in AC-6 it is at most
d. The DOMINO problem also shows that AC-3.0 is at least
an order of magnitude slower in time, with more constraint
checks, than AC-3.1 and AC-6.
In summary, our experiments on randomly generated prob-
lems and RLFAPs show the new approach to AC-3 has a satis-
factory performance on both simple problems and hard prob-
lems compared with the traditional view of AC-3 and state-
of-the-art algorithms.
5 Related work and discussion
Some related work is the development of the general purpose
arc consistency algorithms AC-3, AC-4, AC-6, AC-7 and the
work of [Wallace, 1993]. We summarize previous algorithms
before discussing how this paper gives an insight into AC-3
as compared with the other algorithms.
An arc consistency algorithm can be classified by its
method of propagation. So far, two approaches are employed
in known efficient algorithms: arc oriented and value ori-
ented. Arc oriented propagation originates from AC-1 and its
underlying computation model is the constraint graph. Value
oriented propagation originates from AC-4 and its underlying
computation model is the value based constraint graph.
Definition 4 The value based constraint graph of a CSP (N,
D, C) is G = (V, E) where V = {i.x | i ∈ N, x ∈ Di} and
E = {{i.x, j.y} | x ∈ Di, y ∈ Dj, cij ∈ C}.
Thus a more rigorous name for the traditional constraint
graph may be variable based constraint graph. The key idea
of value oriented propagation is that once a value is removed
only those values related to it will be checked. Thus it is more
fine grained than arc oriented propagation. Algorithms work-
ing with the variable based and value based constraint graphs
are also called coarse grained algorithms and fine grained
algorithms respectively. An immediate observation is that,
compared with the variable based constraint graph, time com-
plexity analysis in the value based constraint graph is straight-
forward.
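Definition 4 can be materialized directly (our encoding; note that edges connect every value pair of a constrained variable pair, whether or not the pair is allowed by cij):

```python
def value_based_graph(domains, constraints):
    """Value based constraint graph of Definition 4: vertices are the
    value nodes i.x, encoded as tuples (i, x); edges are unordered
    pairs, so arcs (i,j) and (j,i) contribute the same edge set."""
    vertices = {(i, x) for i in domains for x in domains[i]}
    edges = {frozenset({(i, x), (j, y)})
             for (i, j) in constraints
             for x in domains[i] for y in domains[j]}
    return vertices, edges
```

For instance, two constrained variables with domain sizes 2 and 1 yield three vertices and two edges, which is why fine grained complexity accounting over this graph is so direct.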
Given a computation model of propagation, the algorithms
differ in the implementation details. For the variable based
constraint graph, AC-3 [Mackworth, 1977] is an "open imple-
mentation". The approach in [Mackworth and Freuder, 1985]
can be regarded as a realized implementation. The new view
of AC-3 presented in this paper can be thought of as another
implementation with optimal worst case complexity. Our new
approach simply remembers the result obtained in the previous
revision of an arc, while in the old one the choice is to be
lazy, forgetting previous computation. Another approach to
improving the space complexity of this model is [Chmeiss
and Jegou, 1996]. For the value based constraint graph, AC-4 is
the first implementation and AC-6 is a lazy version of AC-4.
AC-7 is based on AC-6 and it exploits the bidirectional prop-
erty that given cij, cji and x ∈ Di, y ∈ Dj, cij(x,y) if and
only if cji(y,x).
Another aspect is the general properties or knowledge of a
CSP which can be isolated from a specific arc consistency en-
forcing algorithm. Examples are AC-7 and AC-inference. We
note that the idea of metaknowledge [Bessiere et al., 1999]
can be applied to algorithms of both computing models. For
example, in terms of the number of raw constraint checks,
bidirectionality can be employed in a coarse grained algo-
rithm, e.g. in [Gaschnig, 1978]; however it may not be fully
exploited under the variable based constraint graph model.
Other propagation heuristics [Wallace, 1992], such as propa-
gating deletions first [Bessiere et al., 1999], are also applicable
to algorithms of both models. This is another reason why we
did not include AC-7 in our experimental comparison.
We have now a clear picture on the relationship between
the new approach to AC-3 and other algorithms. AC-3.1 and
AC-6 are methodologically different. From a technical per-
spective, the time complexity analysis of the new AC-3 is
different from that of AC-6 where the worst case time com-
plexity analysis is straightforward. The point of commonality
between the new AC-3 and AC-6 is that they face the same
problem: the domain may shrink in the process of arc con-
sistency enforcing and thus the recorded information may not
be always correct. This makes some portions of the new im-
plementation of the AC-3.1 similar to AC-6. We remark that
the proof technique in the traditional view of AC-3 does not
directly lead to the new AC-3 and its complexity results.
The number of raw constraint checks is also used to eval-
uate practical efficiency of CSP algorithms. In theory, apply-
ing bidirectionality to all algorithms will result in a decrease
of raw constraint checks. However, if the cost of a raw con-
straint check is cheap, the overhead of using bidirectionality
may not be compensated by its savings, as demonstrated in
[Bessiere et al., 1999].
It can also be shown that if the same ordering of variables
and values is used, AC-3.1 and the classical AC-6 perform
the same number of raw constraint checks. AC-3.0 and AC-
4 will make no fewer raw constraint checks than AC-3.1 and
AC-6 respectively.
AC-4 does not perform well in practice [Wallace, 1993;
Bessiere et al., 1999] because it reaches the worst case com-
plexity both theoretically and in actual problem instances
when constructing the value based constraint graph for the
instance. Other algorithms like AC-3 and AC-6 can take ad-
vantage of some instances being simpler where the worst case
doesn’t occur. In practice, both artificial and real life prob-
lems rarely make algorithms behave in the worst case except
for AC-4. However, the value based constraint graph induced
from AC-4 provides a convenient and accurate tool for study-
ing arc consistency.
Given that both variable and value based constraint graphs
can lead to worst case optimal algorithms, we consider their
strength on some special constraints: functional, monotonic
and anti-functional. For more details, see [Van Hentenryck
et al., 1992] and [Zhang and Yap, 2000].
For coarse grained algorithms, it can be shown that for
monotonic and anti-monotonic constraints arc consistency
can be enforced with a complexity of O(ed) (e.g. using our
new view of AC-3). With fine grained algorithms, both AC-4 and
AC-6 can deal with functional constraints. We remark that
the particular distance constraints in RLFAP can be enforced
to be arc consistent in O(ed) by using a coarse grained algo-
rithm. It is difficult for coarse grained algorithms to deal with
functional constraints and tricky for fine grained algorithms
to deal with monotonic constraints.
In summary, there are coarse grained and fine grained al-