Hybrid evolutionary algorithms for constraint satisfaction problems: memetic overkill?
Abstract: We study a selected group of hybrid EAs for solving CSPs, consisting of the best performing EAs from the literature. We investigate the contribution of the evolutionary component to their performance by comparing the hybrid EAs with their "de-evolutionarised" variants. The experiments show that "de-evolutionarising" can increase performance, in some cases doubling it. Considering that the problem domain and the algorithms are arbitrarily selected from the "memetic niche", it seems likely that the same effect occurs for other problems and algorithms. Therefore, our conclusion is that after designing and building a memetic algorithm, one should perform a verification by comparing this algorithm with its "de-evolutionarised" variant.
10 Colinton Road
Edinburgh, EH10 5DT
Vrije Universiteit Amsterdam
De Boelelaan 1081a
1081 HV, Amsterdam
1 Introduction
During the last decade, many researchers have adopted the use of heuristics within an evolutionary algorithm (EA) because of the positive effect on algorithm performance. Advocated already in the mid-1990s (cf. ), such algorithms, called hybrid EAs or memetic algorithms, offer the best of both worlds: the robustness of the EA, because of its unbiased population-based search, and the directed search implied by the heuristic bias. As for algorithm performance, it is assumed and expected that the hybrid EA performs better than the EA alone and the heuristic alone. Supported by significant practical evidence, the contemporary view within the EC community considers this memetic approach the most successful in treating challenging (combinatorial) problems.
In this paper we add a critical note to this opinion. In particular, we design and perform targeted experiments to assess the contribution of the evolutionary component of hybrid EAs to good results. The way to test this is to "de-evolutionarise" the EAs and see whether the results get better or worse. Technically speaking, the question is how to "remove evolution" from an EA. For a solid answer one should identify the essential features of EAs for which it holds that, after removing or switching off these features, the resulting algorithm would no longer qualify as evolutionary. To this end, there are three obvious candidates for such essential features, namely the usage of:
• a population of candidate solutions;
• variation operators, crossover and mutation; and
• natural selection, that is selection based on fitness.
Our work, as reported here, is based on the third option, for the following reasons. Considering the role of the population, it is true that, in general, EAs use a population of more than one candidate solution. However, there are many successful variants where the population size is only one; think, for instance, of evolution strategies [2, 10, 27].
As for the variation operators, we can observe that some move operator in the search space is always necessary for generate-and-test methods. Hence, the mere presence of variation operators is not EA-specific. The speciality of EAs is often related to the use of crossover for mixing information of two or more candidate solutions. Nevertheless, there are successful EAs that do not use recombination, for instance in evolutionary programming, cf. [10, 15, 14].
Considering natural selection, recall that there are two selection steps in the general EA framework: parent selection and survivor selection. For either of them we say that it represents natural selection if a fitness-based bias is incorporated. Note that an EA does not need to have natural selection in both steps. For instance, generational GAs use only parent selection (and all children survive), while ES use only survivor selection (and parents are chosen uniformly at random). However, an EA must have a fitness bias in at least one of these steps. If neither parent selection nor survivor selection is performed using a fitness bias (e.g., both use uniform random selection), then we have no natural selection and obtain a random walk.
Based on these considerations we "de-evolutionarise" the EAs by switching off natural selection. Technically, we set all selection operators to uniform random, that is, candidate solutions are selected randomly, each with an equal probability of being selected.
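To make this operation concrete, the following minimal Python sketch (ours, purely illustrative; all function names are hypothetical) contrasts a fitness-biased parent selection operator with its uniform, de-evolutionarised replacement:

    import random

    def biased_ranking_selection(population, fitness, n_parents, bias=1.5):
        # Fitness-biased selection: linear ranking with bias s in (1, 2].
        # Here fitness(ind) is the number of violated constraints (lower is better).
        ranked = sorted(population, key=fitness)  # best individual first
        mu = len(ranked)
        weights = [(2 - bias) / mu
                   + 2 * (mu - 1 - i) * (bias - 1) / (mu * (mu - 1))
                   for i in range(mu)]
        return random.choices(ranked, weights=weights, k=n_parents)

    def uniform_selection(population, fitness, n_parents, bias=None):
        # "De-evolutionarised" variant: fitness is ignored entirely.
        return [random.choice(population) for _ in range(n_parents)]

Replacing every fitness-biased operator by the uniform one removes all selection pressure; any remaining search power must then come from the heuristic components.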
2 CSPs and our generator
The Constraint Satisfaction Problem (CSP) is a well-known satisfiability problem that is NP-complete (). Formally, the CSP is defined by a set of variables X and a set of constraints C between these variables. Variables are only assigned values from their respective domains, denoted as D. Assigning a value to a variable is called labelling a variable, and a label is a variable-value pair, denoted ⟨x, d⟩. The simultaneous assignment of several values to their variables is called a compound label. A constraint is then a set of compound labels; this set determines when the constraint is violated. If a compound label is not in a constraint, it satisfies the constraint. A compound label that violates a constraint is called a conflict. A solution of the CSP is defined as a compound label containing all variables such that no constraint is violated. The number of distinct variables in the compound labels of a constraint is called the arity of the constraint, and these variables are said to be relevant to the constraint. The arity of a CSP is the maximum arity of its constraints; it is denoted with the letter k.
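As an illustration of these definitions (our sketch, not code from the paper), a binary CSP can be stored as conflict sets, a solution being a compound label over all variables that violates no constraint:

    from itertools import product

    n, m = 3, 2                                    # variables and domain size
    domains = {x: range(m) for x in range(n)}
    constraints = {                                # (x, y) -> conflicting pairs
        (0, 1): {(0, 0), (1, 1)},
        (1, 2): {(0, 1)},
    }

    def violates(assignment, constraints):
        # A constraint is violated iff it contains the compound label
        # of its relevant variables.
        return any((assignment[x], assignment[y]) in conflicts
                   for (x, y), conflicts in constraints.items())

    solutions = [assignment
                 for values in product(*(domains[x] for x in range(n)))
                 for assignment in [dict(zip(range(n), values))]
                 if not violates(assignment, constraints)]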
In this paper we consider only CSPs with an arity of two (k = 2), called binary CSPs. All constraints of a binary CSP have arity two. Although the restriction to binary constraints appears to be a serious limitation of the CSP, E. Tsang showed that every CSP can be transformed into an equivalent binary CSP (). Two methods for translating CSPs have been proposed: the dual graph translation () and the hidden variable translation (). Both methods were discussed in  and it was found that the choice of the transformation method had a large impact on the performance of the algorithm used to solve the resulting binary CSPs. However, in this paper we use randomly generated binary CSPs, so this problem does not affect the presented results.
In this paper we consider CSPs with a uniform domain size only. The number of variables and the uniform domain size of the CSP are two complexity measures of the CSP. They are denoted with n and m respectively. The larger the number of variables and/or the larger the uniform domain size, the more difficult the CSP will be to solve. There are two more complexity measures that will be used: density and average tightness. Density is defined as the ratio between the actual number of constraints (|C|) and the maximum number of constraints of a CSP (n(n − 1)/2 for a binary CSP), and is denoted as a real number between 0.0 and 1.0, inclusive. The tightness of a constraint is the ratio between the number of compound labels in the constraint and the maximum number of compound labels possible (|Dx × Dy| for a binary constraint over variables x and y). The average tightness of a CSP is then the average tightness of all constraints in the CSP. Density is denoted as p1 and average tightness as p2. All four complexity measures together form the parameter vector ⟨n, m, p1, p2⟩ of a CSP-instance.
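Under the conflict-set representation sketched earlier, these measures can be computed directly; a small, hedged Python illustration:

    def density(n, constraints):
        # p1: actual number of constraints over the maximum n(n-1)/2.
        return len(constraints) / (n * (n - 1) / 2)

    def tightness(conflicts, m):
        # Tightness of one binary constraint: its conflicting compound
        # labels over the m * m possible ones (uniform domain size m).
        return len(conflicts) / (m * m)

    def average_tightness(constraints, m):
        # p2: average tightness over all constraints of the instance.
        return sum(tightness(c, m) for c in constraints.values()) / len(constraints)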
The empirical comparison of algorithms has been an important driving force behind the study of CSPs. The lack of a good set of CSP-instances was seen as a major obstacle and has led to research into ways of generating these randomly. It was soon realised that an algorithm that solves a particular set of CSP-instances efficiently may have disappointing performance on other CSP-instances. This in turn led to research on how to produce sets of randomly generated CSP-instances that qualify as a reasonable representation of the whole class.
In the last two decades, several models for randomly generating CSP-instances have been proposed. These models use some or all parameters of the parameter vector of a CSP to control the complexity of the instances generated. By analysing the performance of algorithms on instances generated with different parameter settings, the behaviour of the algorithms throughout the parameter space of the CSP can be studied. A set of CSP-instances for empirically testing the performance of an algorithm is called a testset.
Simply put, generating a CSP-instance involves choosing which constraints to add to the instance and which compound labels to add to these constraints. Two methods have been proposed for this: the ratio-method and the probability-method. In the ratio-method a predetermined ratio of constraints is added to the CSP and a predetermined ratio of compound labels is then added to these constraints (constraints are assumed to be initialised empty). These ratios are based on the p1 and p2 parameters of the CSP respectively. The probability-method considers each constraint and each compound label in the constraint separately and, based on the p1 parameter for the constraints and the p2 parameter for the compound labels, determines if it is added to the CSP. In the end there are two methods for adding constraints and two methods for adding compound labels to these constraints. These can be combined into four models for generating CSP-instances randomly, called A, B, C, and D ([23, 19]).
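A rough sketch of the two methods and how they combine into a generator (our reconstruction under the conflict-set representation; the exact correspondence of the four combinations to the names A to D is not spelled out here):

    import random
    from itertools import combinations, product

    def choose_ratio(items, ratio):
        # Ratio-method: take a predetermined fraction of the items.
        items = list(items)
        return random.sample(items, round(ratio * len(items)))

    def choose_probability(items, p):
        # Probability-method: include each item independently with probability p.
        return [it for it in items if random.random() < p]

    def generate(n, m, p1, p2, pick_constraints, pick_conflicts):
        # Combine one constraint-picking and one conflict-picking method.
        instance = {}
        for pair in pick_constraints(combinations(range(n), 2), p1):
            instance[pair] = set(pick_conflicts(product(range(m), repeat=2), p2))
        return instance

    # One of the four combinations: exact ratios for both steps.
    csp = generate(10, 10, 0.3, 0.8, choose_ratio, choose_ratio)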
In  it was found that when the number of variables (n) of a CSP is large, almost all instances generated by models A to D contain flawed variables. A flawed variable is a variable for which all values in its domain violate a relevant constraint. This is the result of models A to D's two-step approach for generating CSP-instances. To overcome this unwanted behaviour, a new model, called model E, was introduced. Model E combines both steps and generates CSP-instances by adding pe · (n(n − 1)/2) · m^2 compound labels out of the (n(n − 1)/2) · m^2 possible ones. The pe parameter of model E is then a combination of the p1 and p2 parameters of models A to D. However, in , it was found that even for small values of pe (e.g., pe < 0.05), all possible constraints of the CSP-instance will have been added by the model E generator. In the same paper, a new model, model F, was proposed, in which first a model E generator is used to generate a CSP-instance and then a number of constraints are removed (using the ratio-method). The parameter vector of the model F random CSP generator is then ⟨n, m, p1, pe⟩. Because the generator uses the pe parameter of model E and because some compound labels will be removed as well, some experimental tweaking of the pe parameter is needed to generate CSP-instances with a certain p2 value.
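A hedged sketch of models E and F along these lines (ours; parameter handling simplified, and no check for flawed variables or solvability is included):

    import random
    from itertools import combinations, product

    def model_e(n, m, pe):
        # Model E: pick pe * C(n,2) * m^2 conflicting compound labels in one step.
        all_conflicts = [(pair, vals)
                         for pair in combinations(range(n), 2)
                         for vals in product(range(m), repeat=2)]
        instance = {}
        for pair, vals in random.sample(all_conflicts,
                                        round(pe * len(all_conflicts))):
            instance.setdefault(pair, set()).add(vals)
        return instance

    def model_f(n, m, p1, pe):
        # Model F: generate via model E, then remove constraints with the
        # ratio-method so that roughly a p1 ratio of constraints remains.
        instance = model_e(n, m, pe)
        k = min(len(instance), round(p1 * n * (n - 1) / 2))
        keep = random.sample(sorted(instance), k)
        return {pair: instance[pair] for pair in keep}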
3 EAs for solving CSPs
In the last two decades many EAs have been proposed for solving the CSP ([8, 9, 25, 24, 20, 16, 21, 7, 11]). In , the performance of a representative sample of these EAs was compared on a large testset of CSP-instances generated via model E. In , another comparison of a larger number of these EAs, including a larger number of algorithm variants, was made, this time on a testset that was generated with a model F generator. In  it was found that the Heuristic EA (HEA), the Local-Search EA (LSEA), the Eliminate-Split-Propagate EA (ESPEA), and the Stepwise-Adaptation-of-Weights EA (rSAWEA) outperformed all the other EAs.
Space limitations preclude us from including a full description of these four algorithms, but  describes them fully and the original articles of their authors can be used as well: [8, 9] for HEA,  for LSEA,  for ESPEA, and [12, 13] for SAWEA.¹ A table showing the characteristics of these four algorithms is included in Table 1.

Table 1: Characteristics of the HEA, LSEA, ESPEA, and the rSAWEA. [Only fragments of this table survive: representations are an ordered set of values or a permutation of variables; fitness functions count the number of violated constraints (plain, SAW-weighted, or a special variant); the evolutionary model is steady state; parent selection is biased ranking.]
4 Experimental setup

As stated in the introduction, we propose to de-evolutionarise the HEA, LSEA, ESPEA, and rSAWEA by removing natural selection. Natural selection is implemented in the two selection operators of the EAs: the parent selection operator and the survivor selection operator. To remove natural selection, both operators have to be changed. This is done by uniform randomly selecting parents for offspring in the parent selection operator and by uniform randomly selecting the survivors that will be added to the new population in the survivor selection operator. By using uniform selection in both operators, no bias is applied through selection and, in theory, the EAs should perform a random walk through the search space.
To show the difference in performance, we run both variants of all four algorithms and show their results back-to-back.
4.1 The testset

For the experiments in this paper we use the same testset as in . The testset consists only of model F generated solvable CSP-instances; each instance has 10 variables (n = 10) and a uniform domain size of 10 (m = 10). For nine density-tightness combinations in the so-called mushy region, 25 CSP-instances were selected from a population of 1000 generated CSP-instances. The mushy region is the region in the density-tightness parameter space where the generated instances go from being solvable to being unsolvable. The mushy region can be determined by calculating the expected number of solutions, using a formula provided by Smith in : m^n · (1 − p2)^(p1 · n(n−1)/2) (for binary CSPs). Smith predicted that the mushy region can be found where the number of solutions of the generated CSPs would be one, assuming that this solution will be hard to find among all other possible compound labels. The nine density-tightness combinations used are 1: (0.1, 0.9), 2: (0.2, 0.9), 3: (0.3, 0.8), 4: (0.4, 0.7), 5: (0.5, 0.7), 6: (0.6, 0.6), 7: (0.7, 0.5), 8: (0.8, 0.5), and 9: (0.9, 0.4). We identify the density-tightness combinations in the mushy region by the numbers given above.

¹One technical note on this latter algorithm, however, is necessary. Here we use a slightly modified version of the original SAWEA, where for each variable the domain is randomly shuffled before the decoder is applied. We denote this algorithm by rSAWEA. A full description of this algorithm can be found in .
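For illustration, Smith's estimate as reconstructed above is easy to evaluate for the testset parameters; note that the formula-correction step described next adjusts such raw estimates, so the printed values are indicative only:

    def expected_solutions(n, m, p1, p2):
        # Smith's estimate for binary CSPs: m^n * (1 - p2)^(p1 * n * (n-1) / 2).
        return m ** n * (1 - p2) ** (p1 * n * (n - 1) / 2)

    combos = [(0.1, 0.9), (0.2, 0.9), (0.3, 0.8), (0.4, 0.7), (0.5, 0.7),
              (0.6, 0.6), (0.7, 0.5), (0.8, 0.5), (0.9, 0.4)]
    for i, (p1, p2) in enumerate(combos, start=1):
        print(i, expected_solutions(10, 10, p1, p2))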
The testset was constructed in four steps: parameter adjustment, sample sizing, formula correction, and instance selection. In the parameter adjustment step, a sample of CSP-instances is generated and the parameters used to generate these instances are compared to the complexity measures calculated for these instances. The parameters are adjusted to remove any difference between the parameters and the complexity measures. In the sample sizing step, the size of the CSP-instance sample is determined by comparing the average number of solutions found in the sample with the number of solutions calculated from Smith's formula. The size of the sample is increased when the difference between the two is significant, with a (practical) maximum of 1000 instances for each density-tightness combination. In the formula correction step the calculated number of solutions is corrected for any remaining difference. For the instance selection step, a new sample of only solvable CSP-instances, equal in size to that of the sample sizing step, is generated. For each density-tightness combination, the 25 instances are then selected from this set that are closest to the corrected number of solutions found in the formula correction step.
In total the testset includes 9 · 25 = 225 CSP-instances. The testset can be downloaded at:
4.2 Performance measures
Three measures are used to quantify the performance of the algorithms in this paper: the success rate (SR), the average number of evaluations to solution (AES), and the average number of conflict checks to solution (ACCS). The SR will be used to describe the effectiveness of the algorithms; the AES and ACCS will be used to describe their efficiency.
The SR measure is calculated by dividing the number of successful runs, that is, the number of runs in which the algorithm found a solution to the CSP, by the total number of runs. The measure is given as a percentage, 100% meaning all runs were successful. The SR is the most important performance measure to compare two algorithms with. An algorithm with a higher SR finds more solutions than an algorithm with a lower SR. The accuracy of the SR measure is influenced by the total number of runs.
The AES measure is defined as the average number of fitness evaluations needed by an algorithm over all successful runs. If a run is unsuccessful, it does not contribute to the AES measure; if all runs are unsuccessful (SR = 0), the AES is undefined. The AES measure is a secondary measure for comparing two algorithms and its accuracy is affected by the number of successful runs of an algorithm. It should be noted that counting fitness evaluations is a standard way of measuring efficiency in EC. However, in our case, much work performed by the heuristics remains hidden from this measure, for instance by being done in a mutation operator. This motivates the usage of the third measure.
The ACCS measure is calculated as the average number of conflict checks needed by an algorithm over all successful runs. A conflict check is the check made to see whether a certain compound label is in a constraint. As with the AES measure, the ACCS measure is undefined when all runs are unsuccessful, and its accuracy is affected by the number of successful runs of an algorithm. The ACCS measure is a more fine-grained measure than the AES and also measures the so-called hidden work done by the algorithm.
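The three measures are straightforward to compute from run records; a minimal sketch, assuming each run is logged as a dict with hypothetical keys 'solved', 'evaluations', and 'conflict_checks':

    from statistics import mean

    def performance(runs):
        successful = [r for r in runs if r['solved']]
        sr = 100.0 * len(successful) / len(runs)
        # AES and ACCS average over successful runs only and are
        # undefined when SR = 0.
        aes = mean(r['evaluations'] for r in successful) if successful else None
        accs = mean(r['conflict_checks'] for r in successful) if successful else None
        return sr, aes, accs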
4.3 EA setup

The EAs were set up with as little difference between the parameter setups as possible. Table 2 shows the parameter setup of all four algorithms. All algorithms use a population of 10 individuals, from which 10 individuals are selected using a biased ranking parent selection operator with a bias of 1.5. The HEA, LSEA, and ESPEA have a crossover operator which is always applied (crossover rate of 1.0). The HEA, LSEA, and rSAWEA need extra parameters; the values for these parameters are shown in Table 2. How these extra parameters are used can be seen in  or in the original papers on these algorithms. The ESPEA does not have any extra parameters.
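Putting the shared setup together, a simplified generational skeleton of one experiment might look as follows. This is our sketch only: the four EAs differ in representation, variation operators, and evolutionary model, so 'vary' stands in for each algorithm's own heuristic operators, and 'select_parents' is the biased ranking operator for the hybrid EA or the uniform operator for its de-evolutionarised variant (cf. the sketch in the introduction):

    MAX_EVALUATIONS = 100000
    POP_SIZE = 10
    BIAS = 1.5

    def run(problem, make_individual, evaluate, vary, select_parents):
        population = [make_individual(problem) for _ in range(POP_SIZE)]
        evaluations = 0
        while evaluations < MAX_EVALUATIONS:
            parents = select_parents(population,
                                     lambda ind: evaluate(problem, ind),
                                     n_parents=POP_SIZE, bias=BIAS)
            population = [vary(problem, p) for p in parents]
            evaluations += POP_SIZE
            # Zero violated constraints means a solution was found.
            if any(evaluate(problem, ind) == 0 for ind in population):
                return True, evaluations
        return False, evaluations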
Table 2: Parameter setup of the HEA, LSEA, ESPEA, and rSAWEA. [Only partially recoverable: the maximum number of evaluations is 100000 for all four algorithms; the remaining rows (HEA no. Variables = 3, HEA no. Parents, LS Add Rate, LS Remove Rate, LS Delete Rate) lost their values.]

5 Results and analysis

The results of the experiments are summarised in Tables 3, 4, 5, and 6 for the HEA, LSEA, ESPEA, and rSAWEA, respectively. The results for the real hybrid EAs are given in the left half of these tables. These figures show great differences between the algorithms. For example, on instance number 6 the success rates vary between poor (HEA: 44%), medium (75% and 80% for LSEA and ESPEA), and excellent (rSAWEA: 100%). The same holds for AES values, where the differences can be of a factor 10, e.g., the HEA needs around one thousand evaluations, where the LSEA is in the range of ten thousand.
Table 3: Performance of HEA and HEA without selection. SR in percentages, ACCS in thousands, rounded up. [Table data not recovered.]
These results indicate a clear loser and a clear winner. The HEA is obviously the worst-performing algorithm: it finds a solution in fewer runs than the others, shown clearly by a lower SR. The LSEA and the ESPEA are quite close to each other in this respect; their success rates do not differ much. Furthermore, on three out of the nine instances (1, 7, 9) they have an identical SR, and on the other six the two algorithms break even: LSEA wins three times (instances 2, 3, 5) and so does ESPEA (instances 4, 6, 8). We can distinguish the two algorithms based on their efficiency: ESPEA is able to achieve these success rates with less computational effort (AES and ACCS). The rSAWEA algorithm is the clear winner here, as it can solve almost every problem instance and the amount of work it needs for this is significantly less than what the other algorithms need.
Table 4: Performance of LSEA and LSEA without selection. SR in percentages, ACCS in thousands, rounded up. [Table data not recovered.]
Analysing the results from the perspective of our main research goal, we can observe rather surprising outcomes. The comparison of the hybrid EAs and their de-evolutionarised variants discloses that HEA and ESPEA become better if we do not use fitness information anywhere, i.e., neither within parent selection nor within survivor selection. These results are almost ironic, considering that both algorithms originate from a pure EA, where the heuristics are extra add-ons to improve the base algorithm. However, as the experiments show, the add-on can be worth more than the main algorithm. Or, turning the argument around, we could say that natural selection is only harmful here. The case of the LSEA is also somewhat surprising in that the results of the two algorithm variants are fully identical. In this case, fitness bias in the selection operators seems to have no effect at all. The rSAWEA shows a different picture. Removing the evolution from this EA compromises performance. In terms of success rates the effects are not too negative, a 1% decrease at maximum (on 3 instances), but the variant without natural selection is slower, in terms of AES as well as in terms of ACCS. These results, that is, the fact that de-evolutionarising rSAWEA makes it worse, indicate that the good performance of the rSAWEA is not simply the consequence of using a strong heuristic that exploits the properties of CSPs. The rSAWEA is actually very generic; it is only in the decoder that a weak heuristic is applied: if all possible values for a variable would cause a constraint violation, the variable is left unassigned. For this reason it is quite plausible that the rSAWEA is so successful because of the combination of the weak heuristic in the decoder (for deep search) and the adaptive fitness function in the SAW-mechanism (for wide search). This latter enables the algorithm to emphasise different constraints in different stages of the search.
Considering the results from the pure problem solving perspective, we need to compare all eight algorithms based on their performance. The overall winner is then the ESPEA without natural selection, beating the truly evolutionary rSAWEA. Their success rates are not that different; the rSAWEA loses only on 3 instances and only by a small margin. However, the de-evolutionarised ESPEA is much faster in terms of fitness evaluations (AES). It is also faster in terms of conflict checks (ACCS), but the differences regarding this measure are not that big.
Table 5: Performance of ESPEA and ESPEA without selection. SR in percentages, ACCS in thousands, rounded up. [Table data not recovered.]
Table 6: Performance of rSAWEA and rSAWEA without selection. SR in percentages, ACCS in thousands, rounded up. [Table data not recovered.]
6 Conclusions

In this paper we have compared the best four heuristic EAs for solving randomly generated binary CSPs on instances from the mushy region. Such heuristic EAs, or memetic algorithms, supposedly obtain their good performance from two sources: the evolutionary component and the heuristic component. In order to assess the contribution of the evolutionary component, we also implemented a de-evolutionarised version of all of these EAs and tested them on the same test suite. We de-evolutionarised EAs by removing any fitness-based bias from selection and making all choices based on drawings from a uniform distribution. The results showed that two EAs became better, one became worse, and one remained the same. The overall winner of the whole pool of algorithms turned out to be one where natural selection was switched off. In this case it can be argued that the algorithm is not evolutionary at all. These outcomes hint at a "memetic overkill", in the sense that adding too many heuristics to an EA to increase its performance might make the evolutionary component of the hybrid EA or memetic algorithm superfluous, or even harmful.
A remaining question is the possible role of the population. As we listed in the introduction, there are more options for removing the evolution from an EA. In particular, one could set the population size to one (and consequently get rid of crossover as a variation operator). Testing this option could show whether using the heuristics in a population-based manner offers advantages over simply using them in an iterative improvement scheme. This could shed further light on the issue of memetic overkill.
In summary, here we have shown that simply de-evolutionarising a hybrid EA can greatly increase its performance. This means that, even though one arrived at the algorithm in an evolutionary manner², the best algorithm variant is not necessarily evolutionary. Strictly speaking, we have observed this effect only on one problem (randomly generated binary CSPs) and a few algorithms, hence we cannot simply generalise our findings without risk. However, considering that the problem domain and the algorithms are arbitrarily selected from the "memetic niche", it seems very likely that the same effect occurs for other problems and algorithms. Therefore, our conclusion is that after designing and building a memetic algorithm, one should always perform a verification step by comparing this algorithm with its de-evolutionarised variant.

²That is, starting with a general EA and adding heuristics to it for increasing its performance.
References

[1] F. Bacchus and P. van Beek. On the conversion between non-binary and binary constraint satisfaction problems. In Proceedings of the 15th National Conference on Artificial Intelligence – AAAI98, pages 311–318, Madison, Wisconsin, July 1998. Morgan Kaufmann.
[2] T. Bäck. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, NY, 1996.
[3] B.G.W. Craenen. Solving Constraint Satisfaction Problems with Evolutionary Algorithms. Doctoral dissertation, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands, 2005. In press.
[4] B.G.W. Craenen, A.E. Eiben, and J.I. van Hemert. Comparing evolutionary algorithms on binary constraint satisfaction problems. IEEE Transactions on Evolutionary Computation, 7(5):424–445, October 2003.
[5] R. Dechter. On the expressiveness of networks with hidden variables. In T. Dietterich and W. Swartout, editors, Proceedings of the 8th National Conference on Artificial Intelligence, pages 556–562, Hynes Convention Centre, 1990. MIT Press.
[6] R. Dechter and J. Pearl. Tree clustering for constraint networks. Artificial Intelligence, 38(3):353–366, 1989.
[7] G. Dozier, J. Bowen, and D. Bahler. Solving small and large constraint satisfaction problems using a heuristic-based micro-genetic algorithm. In ICEC94 [17], pages 306–311.
[8] A.E. Eiben, P.-E. Raué, and Zs. Ruttkay. Heuristic genetic algorithms for constrained problems, part I: principles. Technical Report IR-337, Vrije Universiteit Amsterdam.
[9] A.E. Eiben, P.-E. Raué, and Zs. Ruttkay. Solving constraint satisfaction problems using genetic algorithms. In ICEC94 [17], pages 542–547.
[10] A.E. Eiben and J.E. Smith. Introduction to Evolutionary Computing. Springer, 2003. ISBN 3-540-40184-9.
[11] A.E. Eiben and J.K. van der Hauw. Adaptive penalties for evolutionary graph-coloring. In J.-K. Hao, E. Lutton, E. Ronald, M. Schoenauer, and D. Snyers, editors, Artificial Evolution '97 – AE97, volume 1363 of Lecture Notes in Computer Science, pages 95–106. Springer-Verlag, Berlin, 1998.
[12] A.E. Eiben, J.K. van der Hauw, and J.I. van Hemert. Graph coloring with adaptive evolutionary algorithms. Journal of Heuristics, 4(1):25–46, 1998.
[13] A.E. Eiben and J.I. van Hemert. SAW-ing EAs: adapting the fitness function for solving constrained problems. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 389–402. McGraw-Hill, 1999.
[14] D.B. Fogel. Evolutionary Computation. IEEE Computer Society Press, 1995.
[15] L.J. Fogel, A.J. Owens, and M.J. Walsh. Artificial Intelligence through Simulated Evolution. John Wiley & Sons, 1966.
[16] …uchi. Genetic algorithm involving coevolution mechanism to search for effective genetic information. In Proceedings of the 4th Conference on Evolutionary Computation – ICEC97, pages 709–714. IEEE Society Press, 1997. [Author list truncated in this copy.]
[17] Proceedings of the 1st IEEE Conference on Evolutionary Computation. IEEE Computer Society Press, 1994.
[18] D. Achlioptas, L.M. Kirousis, E. Kranakis, D. Krizanc, M.S. Molloy, and Y.C. Stamatiou. Random constraint satisfaction: a more accurate picture. In G. Smolka, editor, Principles and Practice of Constraint Programming – CP97, pages 107–120. Springer Verlag, 1997.
[19] E. MacIntyre, P. Prosser, B.M. Smith, and T. Walsh. Random constraint satisfaction: theory meets practice. In M. Maher and J.-F. Puget, editors, Principles and Practice of Constraint Programming – CP98, pages 325–339. Springer Verlag, 1998.
[20] E. Marchiori. Combining constraint processing and genetic algorithms for constraint satisfaction problems. In Th. Bäck, editor, Proceedings of the 7th International Conference on Genetic Algorithms, pages 330–337, San Francisco, CA, 1997. Morgan Kaufmann Publishers, Inc.
[21] E. Marchiori and A. Steenbeek. A genetic local search algorithm for random binary constraint satisfaction problems. In Proceedings of the 14th Annual Symposium on Applied Computing, pages 463–469, 2000.
[22] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin, 3rd edition, 1996.
[23] E.M. Palmer. Graphical Evolution: An Introduction to the Theory of Random Graphs. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons, Ltd., Chichester, 1985.
[24] J. Paredis. Coevolutionary constraint satisfaction. In Y. Davidor, H.-P. Schwefel, and R. Männer, editors, Proceedings of the 3rd Conference on Parallel Problem Solving from Nature – PPSN94, volume 866 of Lecture Notes in Computer Science, pages 46–55. Springer Verlag, 1994.
[25] M.-C. Riff Rojas. Using the knowledge of the constraint network to design an evolutionary algorithm that solves CSP. In Proceedings of the 3rd IEEE Conference on Evolutionary Computation – ICEC96, pages 279–284. IEEE Computer Society Press, 1996.
[26] F. Rossi, C. Petrie, and V. Dhar. On the equivalence of constraint satisfaction problems. In L.C. Aiello, editor, Proceedings of the 9th European Conference on Artificial Intelligence (ECAI'90), pages 550–556, Stockholm, 1990. Pitman.
[27] H.-P. Schwefel. Evolution and Optimum Seeking. John Wiley & Sons, New York, NY, 1995.
[28] B.M. Smith. Phase transition and the mushy region in constraint satisfaction problems. In A.G. Cohn, editor, Proceedings of the 11th European Conference on Artificial Intelligence, pages 100–104. Wiley, 1994.
[29] E. Tsang. Foundations of Constraint Satisfaction. Academic Press, 1993.