Using Soft CSPs for Approximating Pareto-Optimal Solution Sets
Marc Torrens and Boi Faltings
Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland,
Abstract

We consider constraint satisfaction problems where solutions must be optimized according to multiple criteria. When the relative importance of different criteria cannot be quantified, there is no single optimal solution, but a possibly very large set of Pareto-optimal solutions. Computing this set completely is in general very costly and often infeasible in practical applications.
We consider several methods that apply algorithms for soft CSPs to this problem. We report on experiments, both on random and real problems, that show that such algorithms can compute surprisingly good approximations of the Pareto-optimal set. We also derive variants that further improve the performance.
Introduction

Constraint Satisfaction Problems (CSPs) (Tsang 1993; Kumar 1992) are ubiquitous in applications like configuration, planning, resource allocation, scheduling, timetabling and many others. A CSP is specified by a set of variables and a set of constraints among them. A solution to a CSP is a set of value assignments to all variables such that all constraints are satisfied.
In many applications of constraint satisfaction, the objective is not only to find a solution satisfying the constraints, but also to optimize one or more preference criteria. Such problems occur in resource allocation, scheduling and configuration. As an example, we consider in particular electronic catalogs with configuration functionalities:
- a hard constraint satisfaction problem defines the available product configurations, for example different features of a PC;
- the customer has different preference criteria that need to be optimized, for example price, certain functions, speed, and so on.
More precisely, we assume that optimization criteria are modeled as functions that map each solution into a numerical value that indicates to what extent the criterion is violated; i.e., the lower the value, the better the solution.
Copyright 2002, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
We call such problems Multi-criteria Constraint Optimization Problems (MCOPs), which we define formally as follows:

Definition 1. A Multi-criteria Constraint Optimization Problem (MCOP) is defined by a tuple P = (X, D, C), where X = {x_1, ..., x_n} is a finite set of variables, each associated with a domain of discrete values from D = {D_1, ..., D_n}, and C = {c_1, ..., c_m} is a set of constraints. Each constraint is defined by a function on some subset of the variables. This subset is called the scope of the constraint. A constraint over the variables in its scope is a function that maps each tuple of values to a valuation, and defines whether the tuple is allowed (0) or disallowed (a maximum valuation) in the case of a hard constraint, or the degree to which it is preferred in the case of a preference criterion, with 0 being the most preferred and MAX-SOFT the least preferred valuation.
What the best solution to a MCOP is depends strongly on the relative importance of different criteria. This may vary depending on the customer, the time, or the precise values that the criteria take. For example, in travel planning for some people price may be more important than the schedule, while for others it is just the other way around. People find it very difficult to characterize the relative importance of their preferences by numerical weights. Some research has begun to address this problem by inferring constraint weighting from the way people choose solutions (Biso, Rossi, & Sperduti 2000).
When the relative importance of criteria is unknown, it is not possible to identify a single best solution, but at least certain solutions can be classified as certainly not optimal. This is the case when there is another solution which is as good as or better in all respects. We say that a solution s dominates another solution s' if for every constraint, the violation cost in s is no greater than that in s', and if for at least one constraint, s has a lower cost than s'. This is defined formally as follows:
Definition 2. Given a MCOP P with constraints C = {c_1, ..., c_m} and two solutions s and s' of P, s dominates s' iff, for every constraint c_i, v_i(s) <= v_i(s'), and for at least one constraint c_j, v_j(s) < v_j(s'), where v_i(s) denotes the violation cost of s on c_i.

[Footnote: MAX-SOFT is a maximum value for soft constraints. By using a specific maximum valuation for soft constraints, we can easily differentiate between a hard violation and a soft violation.]

Figure 1: Example of solutions in a CSP with two preference criteria. The two coordinates show the values indicating the degrees to which criteria c_1 (horizontal) and c_2 (vertical) are violated; in the figure, solution 2 is dominated by 1, and solution 5 is dominated by 3 and 4.
The idea of Pareto-optimality (Pareto 1896-1987) is to consider all solutions which are not dominated by another one as potentially optimal:

Definition 3. Any solution which is not dominated by another is called Pareto-optimal.

Definition 4. Given a MCOP P, the Pareto-optimal set of P is the set of solutions which are not dominated by any other solution.

In Figure 1, the Pareto-optimal set is {1, 3, 4, 6}, as solution 7 is dominated by 4 and 6, 5 is dominated by 3 and 4, and 2 is dominated by 1.
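Dominance checking is straightforward to implement. The following sketch uses hypothetical violation costs (the text does not give the numeric coordinates of Figure 1; the values below are chosen only to reproduce the stated dominance relations) and recovers the Pareto-optimal set {1, 3, 4, 6}:

```python
# Hypothetical (c1, c2) violation costs for the seven solutions of Figure 1;
# chosen so that 2 is dominated by 1, 5 by 3 and 4, and 7 by 4 and 6.
costs = {1: (0, 6), 2: (1, 7), 3: (2, 3), 4: (3, 2), 5: (4, 4), 6: (5, 0), 7: (5, 2)}

def dominates(a, b):
    # a dominates b: no worse on every criterion, strictly better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = {s for s, c in costs.items()
          if not any(dominates(c2, c) for s2, c2 in costs.items() if s2 != s)}
print(sorted(pareto))  # [1, 3, 4, 6]
```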
Pareto-optimal solutions are hard to compute because unless preference criteria involve only a few of the variables, the dominance relation cannot be evaluated on partial solutions. Research on better algorithms for Pareto-optimality is still ongoing (see, for example, Gavanelli (Gavanelli 2002)), but since it cannot escape this fundamental limitation, generating all Pareto-optimal solutions is likely to always remain computationally very hard.
Therefore, Pareto-optimality has so far found little use in practice, despite the fact that it characterizes optimality in a more realistic way. This is especially true when the Pareto-optimal set must be computed very quickly, for example in interactive configuration applications (e.g. electronic catalogs).
Another characteristic of the Pareto-optimal set is that it usually contains many solutions; in fact, all solutions could be Pareto-optimal. Thus, it will be necessary to have the end user, or another program that has the information about the relative importance of constraints, pick the best solution among the set that has been returned. [Footnote: The Pareto-optimal set is also called the efficient frontier of P.]
Soft CSPs
Given the intractability of computing all Pareto-optimal solutions, the predominant approach in constraint satisfaction has been to map multiple criteria into a single one and then compute a single solution that is optimal according to this criterion. The key question in this case is how to combine the preference orders of the individual constraints into a global preference order for picking the optimal solution. For the scenario in Figure 1, some commonly used soft CSP algorithms would give the following results:
1. in MAX-CSP (Freuder & Wallace 1992), we sum the values returned by each criterion, possibly with a weight, and pick the solution with the lowest sum as the optimal solution. In Figure 1, if we assume that both constraints carry equal weight, this is the solution with the lowest sum of the two violation values.
2. in fuzzy CSP (Fargier, Lang, & Schiex 1993), we characterize each solution by the worst criterion violation, i.e. by the maximum value returned by any criterion, and pick the solution with the lowest result as the optimum.
3. in hierarchical CSP (Borning, Freeman-Benson, & Wilson 1992), criteria have a weight expressing their degree of importance and we order solutions according to the lowest weight of all the constraints that are somehow violated. In Figure 1, the optimum thus depends on whether criterion c_1 or criterion c_2 is considered more important.
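The three aggregation rules can be contrasted on a toy example. The violation vectors below are invented for illustration (they are not the values of Figure 1), and the hierarchical rule is approximated by a simple lexicographic ordering:

```python
# Invented violation costs for three solutions over two criteria (c1, c2)
costs = {"a": (0, 4), "b": (2, 3), "c": (5, 0)}

max_csp = min(costs, key=lambda s: sum(costs[s]))               # lowest sum -> "a"
fuzzy = min(costs, key=lambda s: max(costs[s]))                 # lowest worst violation -> "b"
hier_c1 = min(costs, key=lambda s: (costs[s][0], costs[s][1]))  # c1 outranks c2 -> "a"
hier_c2 = min(costs, key=lambda s: (costs[s][1], costs[s][0]))  # c2 outranks c1 -> "c"
```

Note that each scheme can select a different optimum from the same set of solutions, which is precisely why the choice of combination operator matters.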
MAX-CSP can be solved efficiently by branch and bound techniques. Fuzzy and hierarchical CSPs can be solved more efficiently by incrementally relaxing constraints in increasing order of importance until solutions are found. More recently, it was observed that most soft CSP methods can be seen as instances of the more general class of soft constraints using c-semirings introduced by (Bistarelli et al. 1999). Depending on the way that the semiring operators are instantiated, one obtains the different soft constraint formalisms. Semiring-based CSPs are developed in detail in (Bistarelli 2001).
All these methods make the crucial assumption that violation costs for different criteria are comparable or can be made comparable by known weighting metrics. This assumption also ensures that there is a single optimal solution which can be returned by the algorithm. The weights have a strong influence on the solution that will be returned: in Figure 1, depending on the relative weights, solutions 1, 4 or 6 could be the optimal ones.
Soft CSP with Multiple Solutions
Interestingly, most decision aids already return not the single optimal solution, but an ordered list of the top-ranked solutions. Thus, web search engines return hundreds of documents deemed to be the best matches to a query, and electronic catalogs return a list of possibilities that fit the criteria to decreasing degrees. In general, these solutions have been calculated assuming a certain weight distribution among constraints. It appears that listing a multitude of nearly optimal solutions is intended to compensate for the fact that these weights, and thus the optimality criterion, are usually not accurate for the particular user.
For example, in Figure 1, if we assume that the constraints have equal weight, the top four solutions according to this weighting are also the Pareto-optimal ones.
The questions we address in this paper follow from this observation:
- how closely do the top-ranked solutions generated by a scheme with known constraint weights, in particular MAX-CSP, approximate the Pareto-optimal set, and
- can we derive variations that cover this set better while maintaining efficiency?
We have performed experiments in the domain of configuration problems that indicate that MAX-CSP can indeed provide a surprisingly close approximation of the Pareto-optimal set, both in real settings and in randomly generated problems, and derive improvements to the methods that could be applied in general settings.
Using Soft CSP Algorithms for Approximating
Pareto-optimal Sets
To approximate the set of Pareto-optimal solutions, the simplest approach is to map the MCOP into an optimization problem with a single criterion obtained by a fixed weighting of the different criteria, called a weighted constrained optimization problem (WCOP):

Definition 5. A WCOP is an MCOP with an associated weight vector w = (w_1, ..., w_m), w_i >= 0. The optimal solution to a WCOP is a tuple s that minimizes the valuation V(s) = w_1 v_1(s) + ... + w_m v_m(s), where v_i(s) is the violation cost of s on constraint c_i; V(s) is called the valuation of s. The k best solutions to a WCOP are the k solutions with the lowest valuations. We call feasible solutions to a WCOP those solutions which do not violate any hard constraint.
Note that when the weight vector consists of all 1s, WCOP is equivalent to MAX-CSP and is also an instantiation of the semiring CSP framework. WCOPs can be solved by branch-and-bound search algorithms. These algorithms can be easily adapted to return not only the best solution, but an ordered list of the k best solutions. In our work we use Partial Forward Checking (PFC) (Freuder & Wallace 1992), which is a branch and bound algorithm with propagation.
Pareto-optimality of WCOP Solutions
As mentioned before, in practice it turns out that among the k best solutions to a WCOP, many are also Pareto-optimal. Theorem 1 shows indeed that the optimal solution of a WCOP is always Pareto-optimal, and that furthermore among the k best solutions all those which are not dominated by another one are Pareto-optimal for the whole problem:

Theorem 1. Let S_k be the set of the k best solutions obtained by optimizing with a weight vector w = (w_1, ..., w_m), w_i > 0. If s is in S_k and s is not dominated by any s' in S_k, then s is Pareto-optimal.
Proof. Assume that s is not Pareto-optimal. Then, there is a solution s' which dominates solution s, and by Definition 2: v_i(s') <= v_i(s) for all constraints c_i, with strict inequality for at least one. As a consequence, since all weights are positive, we also have: w_1 v_1(s') + ... + w_m v_m(s') < w_1 v_1(s) + ... + w_m v_m(s), i.e. s' must be better than s according to the weighted optimization function. But then s' must also be among the k best solutions, i.e. s' is in S_k and dominates s, which contradicts the fact that s is not dominated by any solution in S_k.
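Theorem 1 can also be checked empirically on a small brute-force instance: among the k best solutions under a strictly positive weight vector, every solution not dominated within the set is Pareto-optimal for the whole problem. The table-based random criteria below are invented for the test:

```python
import random
from itertools import product

rng = random.Random(1)
n, d, m = 3, 4, 3  # variables, domain size, number of criteria
# each criterion is a random table mapping a full assignment to a violation cost
tables = [{t: rng.randint(0, 9) for t in product(range(d), repeat=n)} for _ in range(m)]
sols = list(product(range(d), repeat=n))
vals = {s: tuple(tab[s] for tab in tables) for s in sols}

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

# the true Pareto-optimal set, by exhaustive comparison
pareto = {s for s in sols if not any(dominates(vals[t], vals[s]) for t in sols)}

k = 10
S_k = sorted(sols, key=lambda s: sum(vals[s]))[:k]  # k best, all weights 1 > 0
nondom = [s for s in S_k if not any(dominates(vals[t], vals[s]) for t in S_k)]
assert all(s in pareto for s in nondom)  # Theorem 1: all of these are Pareto-optimal
assert S_k[0] in pareto                  # the weighted optimum is Pareto-optimal
```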
This justifies the use of soft CSP algorithms to find not just one, but a larger set of Pareto-optimal solutions. In particular, by filtering the k best solutions returned by a WCOP algorithm to eliminate the ones which are dominated by another one in the set, we find only solutions which are Pareto-optimal for the entire problem. We can thus bypass the costly step of proving non-dominance on the entire solution set.
The first algorithm is thus to find a subset of the Pareto-optimal set by modeling the problem as a WCOP with a single weight vector, generating the k best solutions, and filtering them to retain only those which are not dominated (Algorithm 1).
Algorithm 1: Method for approximating the Pareto-optimal set of a MCOP using a single WCOP solved with PFC (Partial Forward Checking).
Input: P: MCOP; k: the maximal number of solutions to compute.
Output: S: an approximation of the Pareto-optimal set.
S := PFC(WCOP(P, (1, ..., 1)), k)
S := eliminateDominatedSolutions(S)
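As a sketch (not the authors' implementation), Algorithm 1 can be emulated with exhaustive enumeration standing in for the PFC branch-and-bound; the problem encoding used here (domain lists, hard-constraint predicates, soft cost functions) is our own invention for illustration:

```python
from itertools import product

def algorithm1(domains, hard, soft, k):
    """k best feasible solutions under the all-ones weight vector, then filtering."""
    feasible = [s for s in product(*domains) if all(h(s) for h in hard)]
    best = sorted(feasible, key=lambda s: sum(c(s) for c in soft))[:k]
    vals = {s: tuple(c(s) for c in soft) for s in best}
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    # eliminateDominatedSolutions
    return [s for s in best if not any(dominates(vals[t], vals[s]) for t in best)]

# toy MCOP: two 0/1 variables, no hard constraints, two conflicting criteria
approx = algorithm1([[0, 1], [0, 1]], [lambda s: True],
                    [lambda s: s[0] + s[1], lambda s: 2 - s[0] - s[1]], k=4)
```

By Theorem 1, every solution this sketch returns is Pareto-optimal for the whole problem.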
The Weighted-sums Method
The above method has the weakness that it generates solutions that are optimal with respect to a certain weight vector and thus likely to be very similar to one another. The iterated weighted-sums approach, as described for example by Steuer (Steuer 1986), attempts to overcome this weakness by calling a WCOP method several times with different weight vectors. Each WCOP will give us a different subset of Pareto-optimal solutions, and a good distribution of weight vectors should give us a good approximation of the Pareto-optimal set.
Algorithm 2: Weighted-sums method for approximating the Pareto-optimal set of a MCOP. The n_w weight vectors are generated to give an adequate distribution of solutions.
Input: P: MCOP; k: the maximal number of solutions to compute; W = {w^1, ..., w^n_w}: a collection of weight vectors.
Output: S: an approximation of the Pareto-optimal set.
S := PFC(WCOP(P, (1, ..., 1)), k/(n_w + 1))
for each w^i in W do
    S := S ∪ PFC(WCOP(P, w^i), k/(n_w + 1))
S := eliminateDominatedSolutions(S)
Basically, the proposed main method (Algorithm 2) consists of performing several runs over WCOPs with different weight vectors for the soft constraints and one run over the vector (1, ..., 1). The method has two parameters: k, which is the maximal number of solutions to be found, and W = {w^1, ..., w^n_w}, which is the collection of weight vectors. Algorithm 2 performs n_w + 1 iterations, one for each different WCOP with weight vector w^i, and one for a WCOP with the weight vector (1, ..., 1). At each iteration, it computes the best k/(n_w + 1) solutions to the corresponding WCOP. At the end of each iteration, dominated solutions are filtered out, so by Theorem 1, the resulting set of solutions are Pareto-optimal.
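A self-contained sketch of the weighted-sums method follows, again with exhaustive enumeration standing in for PFC and with an invented problem encoding: one run per weight vector plus the all-ones run, k/(n_w + 1) solutions per run, then dominance filtering:

```python
from itertools import product

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and a != b

def k_best(domains, hard, soft, weights, k):
    # k lowest-valuation feasible solutions; enumeration stands in for PFC
    feasible = [s for s in product(*domains) if all(h(s) for h in hard)]
    return sorted(feasible,
                  key=lambda s: sum(w * c(s) for w, c in zip(weights, soft)))[:k]

def weighted_sums(domains, hard, soft, k, weight_vectors):
    runs = [[1] * len(soft)] + [list(w) for w in weight_vectors]
    per_run = max(1, k // len(runs))          # k/(n_w + 1) solutions per iteration
    pool = {s for w in runs for s in k_best(domains, hard, soft, w, per_run)}
    vals = {s: tuple(c(s) for c in soft) for s in pool}
    return [s for s in pool if not any(dominates(vals[t], vals[s]) for t in pool)]

approx = weighted_sums([[0, 1], [0, 1]], [lambda s: True],
                       [lambda s: s[0] + s[1], lambda s: 2 - s[0] - s[1]],
                       k=9, weight_vectors=[[1, 0], [0, 1]])
```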
Empirical Results on Random Problems
We have tested several different instances and variants of the described methods on randomly generated problems:
Method 1: consists of one search using Algorithm 1.
Method 2: Algorithm 2 with n_w + 1 iterations using n_w randomly generated weight vectors. In our experiments the number of iterations varied from 3 to 19 in steps of 2.
Method 3: Algorithm 2 with one iteration for each constraint. The iteration for the constraint c_i is performed with the weight vector w, where w_i = 0 and w_j = 1 for all j != i.
Method 4: Algorithm 2 with one iteration for each pair of constraints. The iteration for the constraints c_i and c_j is performed with the weight vector w, where w_i = w_j = 0, and w_k = 1 for all other k.
Method 5: Algorithm 2 with one iteration for each constraint and one for each pair of constraints. This method mixes Methods 3 and 4: it takes the weight vectors from both.
Method 6: Using the Lexicographic Fuzzy CSP (Dubois, Fargier, & Prade 1996) approach for obtaining the best solutions.
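The weight vectors of Methods 2-5 are easy to generate; a sketch (the function names are ours, not from the paper):

```python
import random
from itertools import combinations

def method2_vectors(m, n_w, seed=0):
    # n_w random weight vectors over m criteria (Method 2)
    rng = random.Random(seed)
    return [[rng.random() for _ in range(m)] for _ in range(n_w)]

def method3_vectors(m):
    # one vector per criterion c_i, with w_i = 0 and all other weights 1
    return [[0 if j == i else 1 for j in range(m)] for i in range(m)]

def method4_vectors(m):
    # one vector per pair of criteria, with both weights set to 0
    return [[0 if j in pair else 1 for j in range(m)]
            for pair in combinations(range(m), 2)]

def method5_vectors(m):
    # Method 5 mixes the vectors of Methods 3 and 4
    return method3_vectors(m) + method4_vectors(m)
```

With m = 6 criteria these yield 6, 15 and 21 vectors for Methods 3, 4 and 5 respectively.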
The advantage of working with Method 1 is that it uses standard well-known algorithms. The disadvantage is that it tends to find solutions which are in the same area of the solution space. To increase the diversity of the resulting set of Pareto-optimal solutions, we propose to perform several iterations with random weight vectors (Method 2). The intuition behind Method 3 is to remove the effect of one criterion at each iteration in order to avoid strong dominance of some of the constraints. The same idea is behind Methods 4 and 5. The motivation for Method 6 is to compare how well other instantiations of the semiring CSP framework apply to this problem. Lexicographic Fuzzy CSPs are the most powerful version of fuzzy CSP methods and are interesting because they admit more efficient search algorithms (Bistarelli et al. 1999).
Random Soft Constraint Satisfaction Problems
The topology of a random soft CSP is defined by a tuple (n, d, hc, ht, sc, st), where n is the number of variables in the problem and d the size of their domains. hc is the graph density in percentage for unary and binary hard constraints, and ht is the tightness in percentage for disallowed tuples in unary and binary hard constraints. sc and st are the graph density and tightness in percentage for unary and binary soft constraints. Valuations for soft constraints can take values from 0 to MAX-SOFT. For simplicity, hard and soft constraints are separated and we are not considering mixed constraints; a constraint is therefore either hard or soft. For building random instances of soft CSPs, we choose the variables for each constraint following a uniform probability distribution. In the same way, we choose the tuples in constraints. Valuations for soft tuples are randomly generated between 0 and MAX-SOFT, and valuations for hard tuples are represented by a maximum valuation greater than MAX-SOFT.
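A generator for such instances might look as follows. This is a sketch under the stated conventions, not the authors' generator: only binary constraints are drawn, MAX_SOFT is set arbitrarily, and the hc/sc densities are simplified to a single draw per variable pair (which also keeps hard and soft constraints disjoint):

```python
import random
from itertools import combinations

MAX_SOFT = 10           # arbitrary maximum soft valuation
MAX_HARD = MAX_SOFT + 1  # hard violations get a valuation above MAX_SOFT

def random_soft_csp(n, d, hc, ht, sc, st, seed=0):
    """Random binary soft CSP: (n, d, hc, ht, sc, st) with densities and
    tightnesses in percent, as in the text (unary constraints omitted)."""
    rng = random.Random(seed)
    hard, soft = [], []
    for pair in combinations(range(n), 2):
        r = rng.random() * 100
        if r < hc:        # hard constraint on this pair; ht% of tuples disallowed
            table = {(a, b): (MAX_HARD if rng.random() * 100 < ht else 0)
                     for a in range(d) for b in range(d)}
            hard.append((pair, table))
        elif r < hc + sc:  # soft constraint; st% of tuples carry a violation cost
            table = {(a, b): (rng.randint(1, MAX_SOFT) if rng.random() * 100 < st else 0)
                     for a in range(d) for b in range(d)}
            soft.append((pair, table))
    return hard, soft
```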
The algorithms have been tested with different sets of soft CSPs with 5 or 10 variables and 10 values for each variable. The hard unary/binary constraint density hc has been varied from 20% to 80% in steps of 20, and the tightness ht for hard constraints varies also from 20% to 80% in steps of 20. The soft unary/binary constraint density sc has been varied from 20% to 80% in steps of 10, with fixed tightness st. In the case of 5 variables, in total there could be 15 soft constraints (5 unary constraints and 10 binary constraints). In the case of 10 variables, in total there could be 55 soft constraints (10 unary constraints and 45 binary soft constraints).
For every different class of problems, 50 different instances were generated, and each instance has been tested with the different proposed methods. The methods have been tested varying the total number of solutions to be computed, from 30 to 530 in steps of 50 and with progressively larger steps beyond 530.
For the different problem topologies, the averages of the results over the instances are evaluated in the following section.
Firstly, we are interested in knowing how many Pareto-optimal solutions there are in a problem depending on the number of criteria (soft constraints).

Figure 2: Number of Pareto-optimal solutions depending on how many soft constraints (from 3 to 12) we consider, for randomly generated problems with 5 variables, 10 values per domain and 20% hard unary/binary constraint density, with hard tightness of 20%, 40%, 60% and 80% (the average number of solutions for the generated problems decreases with tightness, down to 778 for hard tightness = 80%).

In Figure 2, it is shown that the number of Pareto-optimal solutions clearly increases when the number of criteria increases. The same phenomenon applies for instances with 5 and 10 variables. On the other hand, we have observed that even if the number of Pareto-optimal solutions decreases when the problem gets more constrained (fewer feasible solutions), the percentage with respect to the total number of solutions increases. Thus, the proportion of Pareto-optimal solutions becomes more important as the problem gets more constrained.
We have evaluated the proposed methods for each type of generated problem. Figure 3 shows the average proportion of Pareto-optimal solutions found by the different methods for problems with 6 soft constraints. We emphasize the results of the methods up to 530 solutions because in real applications it may not be feasible to compute a larger set of solutions; when computing more solutions, the behavior of the different methods does not change significantly. The iterative methods perform better than the single search algorithm (Method 1) with respect to the total number of solutions computed. It is worth noting that the iterative methods based on Algorithm 2 find more Pareto-optimal solutions when the number of iterations increases. The Lexicographic Fuzzy method (Method 6) finds only a very low percentage of Pareto-optimal solutions. With Method 6, Theorem 1 does not apply, so the percentage of Pareto-optimal solutions shown is computed a posteriori by filtering out those solutions that were not really Pareto-optimal for the entire problem.
Figure 3: Pareto-optimal solutions found by the different proposed methods (in %), for totals of computed solutions varying from 30 to 530 in steps of 100. The methods compared are: 1 iteration; 3, 5, 7, 11, 15 and 19 iterations (with 2, 4, 6, 10, 14 and 18 random weight vectors respectively); 6 iterations with 1 criterion left out; 16 iterations with 2 criteria left out; 22 iterations (mixed method); and Lexicographic Fuzzy. Methods are applied to 50 randomly generated problems with 10 variables, 10 values per domain, 40% density of hard unary/binary constraints, 40% hard tightness and 6 criteria (soft constraints).

Figure 4: Pareto-optimal solutions found by the different proposed methods (in %) with respect to the computing time (0 to 5 seconds), for the same methods as in Figure 3 except Lexicographic Fuzzy. For this plot, the problems have 10 variables, 10 values per domain, 40% density of hard unary/binary constraints, 40% hard tightness and 6 criteria (soft constraints).

Another way of comparing the different methods is to
compare the number of Pareto-optimal solutions found with respect to the computing time (Figure 4). Using this comparison, Method 1 performs the best. The performance of the variants of Method 2 decreases when the number of iterations increases. Method 3 performs better than Method 4, which performs better than Method 5, in terms of computing time.
In general, we can observe that when the number of iterations of the methods increases, the performance with respect to the total number of computed solutions also increases, but the performance with respect to the computing time decreases. This is due to the fact that the computing time of PFC is not linear in the number of solutions requested: computing the k best solutions in a single search is cheaper than computing the same total number in several separate searches of k/n solutions each. For example, computing a given total number of solutions with 7 iterations takes considerably longer than computing them with one iteration.
Even if the methods based on Algorithm 2 take more time than Algorithm 1 for getting the same percentage of Pareto-optimal solutions, they are likely to produce a more representative subset of the Pareto-optimal set.
Using a brute force algorithm that computes all the feasible solutions and filters out those which are dominated took considerably longer for the same problems as in the above figures. This demonstrates the interest of using approximative methods for computing Pareto-optimal solutions, especially for interactive configuration applications (e.g. electronic catalogs).
Empirical Results in a Real Application
The Air Travel Configurator
The problem of arranging trips is here modeled as a soft CSP (see (Torrens, Faltings, & Pu 2002) for a detailed description of our travel configurator). An itinerary is a set of legs, where each leg is represented by a set of origins, a set of destinations, and a set of possible dates. Leg variables represent the legs of the itinerary and their domains are the possible flights for the associated leg. Another variable represents the set of possible fares applicable to the itinerary. The same itinerary can have several different fares depending on the cabin class, airline, schedule and so on. Usually, for each leg there can be about 60 flights, and for each itinerary, there can be about 40 fares. Therefore, the size of the search space for a round trip is about 60 x 60 x 40 = 144,000, and for a three-leg trip about 60 x 60 x 60 x 40 = 8,640,000. Constraint satisfaction techniques are well-suited for modeling the travel planning problem. In our model, two types of configuration constraints (hard constraints) guarantee that:
1. a flight for one leg arrives before the flight for the next leg takes off, and
2. a fare is really applicable to a set of flights (depending on the fare rules).
Users normally have preferences about the itinerary they are willing to plan. They can have preferences about the schedule of the itinerary, the airlines, the class of service, and so on. Such preferences are modeled as soft constraints. Thus, this problem can be naturally modeled as a MCOP with hard constraints for ensuring the feasibility of the solution and soft constraints for taking into account the users' preferences. [Footnote: In the travel industry, the fare applicable to an itinerary is not the sum over the fares for each flight.]

Figure 5: How many solutions do we need to compute in order to get a certain number of Pareto-optimal solutions? This example is based on a round trip and shows that, for instance, 50 Pareto-optimal solutions can be found from fewer than 70 computed solutions.
Tests on the Travel Configurator Application
Method 1 has been tested with our travel configurator. We have generated 68 instances of itineraries: 58 round trips, 5 3-leg trips, 3 5-leg trips, 1 6-leg trip and 1 7-leg trip. These instances were tested with 5 unary soft constraints simulating user preferences. For this application, the goal is to find a set of Pareto-optimal solutions to be shown to the user. Thus, the problem is not to find all Pareto-optimal solutions but a relatively small set of them. In order to achieve this, we have applied a branch and bound algorithm with propagation (PFC) to discover how many solutions we need to compute in order to obtain a certain number of Pareto-optimal solutions. Precisely, we study how many solutions are needed to find 50 Pareto-optimal solutions.
Evaluation on the Travel Configurator
Figure 5 shows the test results for a round trip (3 variables, with domain sizes 40, 60 and 60) with 5 unary soft constraints (expressing users' preferences). We observe that the number of solutions to compute in order to get a certain number of Pareto-optimal solutions in this kind of problem is very reasonable. Indeed, the method proves very usable for interactive configuration applications, and specifically for electronic catalogs.
The plot shown in Figure 5 has been generated for all 68 instances of the problems previously described. For all the examples we get similar results: by computing 90 solutions to these problems, we always get 50 Pareto-optimal solutions.
In electronic catalogs and similar applications, it is useful to find a certain number of Pareto-optimal solutions, even if this set only represents a small fraction of all the Pareto-optimal solutions. Actually, we consider that the number of total solutions that can be shown to the user must be small because of the limitations of current graphical user interfaces.
Related Work
The most commonly used approach for solving a Multi-criteria Optimization Problem is to convert the MCOP into several COPs which can be solved using standard mono-criterion optimization techniques. Each COP will then give a Pareto-optimal solution to the problem. Steuer's book (Steuer 1986) gives a deep study of different ways to translate a MCOP to a set of COPs. The most used strategy is to optimize one linear function of all criteria with positive weights. The drawback of the method is that some Pareto-optimal solutions cannot be found if the efficient frontier is not concave. [Footnote: Concave in the case that the optimization function is a minimization function, convex if it is a maximization function.] Our methods are based on this approach.
Gavanelli (Gavanelli 2001; 2002) addresses the problem of multi-criteria optimization in constraint problems directly. His method is based on a branch and bound schema where Pareto dominance is checked against a set of previously found solutions using Point Quad-Trees. Point Quad-Trees are useful for efficiently bounding the search. However, the algorithm can be very costly if the number of criteria or the number of Pareto-optimal solutions is high. Gavanelli's method significantly improves on the approach of Wassenhove and Gelders (Wassenhove & Gelders 1980), which basically consists of performing several search processes, one for each criterion. Each iteration takes the previous solution and tries to improve it by optimizing another criterion. Using this method, each search produces one Pareto-optimal solution, so many search processes must be run in order to approximate the Pareto-optimal set.
The Global Criterion Method solves a MCOP as a COP where the criterion to optimize is the minimization of a distance function to an ideal solution. The ideal solution is precomputed by optimizing each criterion independently (Salukvadze 1974).
Incomplete methods have also been developed for solving multi-criteria optimization problems, notably genetic algorithms (Deb 1999) and methods based on tabu search (Hansen 1997).
Conclusion

This paper deals with a very well-studied topic, Pareto-optimality in multi-criteria optimization. It has been commonly understood that Pareto-optimality is intractable to compute, and therefore it has not been studied further. Instead, many applications have simply mapped multi-criteria search into a single criterion with a particular weighting and returned a list of the k best solutions rather than a single best one. This approach allows leveraging the well-developed framework of soft CSPs for Multi-criteria Optimization Problems.
Our contribution is to have shown empirically that this procedure, when combined with a filtering step that eliminates dominated solutions from the results of the optimization procedure, indeed yields surprisingly good approximations of the Pareto-optimal set. Based on this observation, we have shown that the coverage can be improved at a small price in cost by running the same procedure with different random weight vectors.
We have implemented this method with great success in a commercial travel planning tool, and believe that it would apply well to many other applications.

References
Biso, A.; Rossi, F.; and Sperduti, A. 2000. Experimental Results on Learning Soft Constraints. In 7th International Conference on Principles of Knowledge Representation and Reasoning.
Bistarelli, S.; Fargier, H.; Montanari, U.; Rossi, F.; Schiex,
T.; and Verfaillie, G. 1999. Semiring-based CSPs and
Valued CSPs: Basic Properties and Comparison. CON-
STRAINTS: an international journal 4(3).
Bistarelli, S. 2001. Soft Constraint Solving and Programming: a general framework. Ph.D. Dissertation, Università degli Studi di Pisa.
Borning, A.; Freeman-Benson, B.; and Wilson, M. 1992.
Constraint Hierarchies. Lisp and Symbolic Computation:
An International Journal 5(3):223–270.
Deb, K. 1999. Multi-objective genetic algorithms: Prob-
lem difficulties and construction of test problems. Evolu-
tionary Computation 7(3):205–230.
Dubois, D.; Fargier, H.; and Prade, H. 1996. Possibil-
ity Theory in Constraint Satisfaction Problems: Handling
priority, preference and uncertainty. Applied Intelligence.
Fargier, H.; Lang, J.; and Schiex, T. 1993. Selecting Pre-
ferred Solutions in Fuzzy Constraint Satisfaction Problems.
In Proceedings of the First European Congress on Fuzzy
and Intelligent Technologies.
Freuder, E. C., and Wallace, R. J. 1992. Partial constraint
satisfaction. Artificial Intelligence 58(1):21–70.
Gavanelli, M. 2001. Partially ordered constraint optimiza-
tion problems. In Walsh, T., ed., Principles and Practice
of Constraint Programming, 7th International Conference
- CP 2001, volume 2239 of Lecture Notes in Computer Sci-
ence, 763. Paphos, Cyprus: Springer Verlag.
Gavanelli, M. 2002. An implementation of Pareto optimal-
ity in CLP(FD). In Jussien, N., and Laburthe, F., eds., CP-
AI-OR - International Workshop on Integration of AI and
OR techniques in Constraint Programming for Combina-
torial Optimisation Problems, 49–64. Le Croisic, France:
Ecole des Mines de Nantes.
Hansen, M. P. 1997. Tabu Search in Multiobjective Opti-
misation: MOTS. In Proceedings of MCDM'97.
Kumar, V. 1992. Algorithms for Constraint Satisfaction
Problems: A Survey. AI Magazine 13(1):32–44.
Pareto, V. 1896. Cours d'économie politique professé
à l'université de Lausanne, volume 1. Lausanne: F. Rouge.
Salukvadze, M. E. 1974. On the existence of solu-
tion in problems of optimization under vector valued cri-
teria. Journal of Optimization Theory and Applications.
Steuer, R. E. 1986. Multi Criteria Optimization: Theory,
Computation, and Application. New York: Wiley.
Torrens, M.; Faltings, B.; and Pu, P. 2002. SmartClients:
Constraint satisfaction as a paradigm for scaleable intel-
ligent information systems. CONSTRAINTS: an interna-
tional journal 7:49–69.
Tsang, E. 1993. Foundations of Constraint Satisfaction.
London, UK: Academic Press.
Wassenhove, L. N. V., and Gelders, L. F. 1980. Solving a
bicriterion scheduling problem. European Journal of Op-
erational Research 4(1):42–48.