Conditionally Acyclic CO-Networks
for Efficient Preferential Optimization
Pierre-François Gimenez a,* and Jérôme Mengin b,**
aCentraleSupélec, Univ. Rennes, IRISA, France
bIRIT, Université Paul Sabatier, CNRS, France
Abstract. This paper focuses on graphical models for modelling preferences in combinatorial spaces and on their use for item optimisation. The preferential optimisation task seeks the preferred item among those containing some defined values, which is useful in many recommendation settings in e-commerce. We show that efficient (i.e., polynomial-time) preferential optimisation is achieved with a subset of cyclic CP-nets called conditionally acyclic CP-nets. We also introduce a new graphical preference model, called Conditional-Optimality networks (CO-networks), that is more concise than conditionally acyclic CP-nets and LP-trees but has the same expressiveness with respect to optimisation. Finally, we empirically show that preferential optimisation can be used for encoding alternatives into partial instantiations and vice versa, paving the way towards CO-nets and CP-nets unsupervised learning with the minimum description length (MDL) principle.
1 Introduction
Online shopping services, like video-on-demand streaming services and product configurators for computers, cars, or kitchens, rely on recommendation and customisation of the user experience to boost sales [29]. Recommendations are essential in large, combinatorial product spaces, where the number of alternatives can lead to over-choice confusion [17]. In such a case, a user is overwhelmed by the possibilities and cannot choose. A common tool in configurators is optimal completion, where the configurator automatically completes a partially configured product by maximising the product's utility for the user. Recommendation, and optimal completion in particular, are typically based on a model of user preferences. However, except when the number of attributes is very small, it is intractable to represent a linear order over the space of all possible alternatives in extension, so for a decision-maker to give their preferences, some structure is needed. Several types of graphical representations of preferences have been studied in the literature. Combinatorial preferences can be modelled with numerical models, such as GAI-nets [14] and ensemble ranking functions [12], or by ordinal graphical models, such as lexicographic preference trees (LP-trees [11]) and conditional preference networks (CP-nets [4]). We focus on the latter models because the optimal completion query can be answered in polynomial time with LP-trees and acyclic CP-nets, using the Forward Sweep algorithm [4].
* Corresponding Author. Email: pierre-francois.gimenez@centralesupelec.fr
** Corresponding Author. Email: Jerome.Mengin@irit.fr
LP-trees are graphical representations of a linear (total) order based on the relative importance of attributes: to compare two alternatives, their values for the most important attribute are compared. If they are different, then we can conclude which alternative is preferred. Otherwise, the second most important attribute is used for comparison, and so on. CP-nets are based on ceteris paribus preferences and only encode the comparison between two outcomes that differ in the value of only one variable. Therefore, some alternatives may be incomparable, even by transitivity. However, since CP-nets do not encode attribute importance, they are more compact than LP-trees. So, these two model classes have a trade-off: LP-trees describe total orders, and CP-nets are more compact. However, CP-nets are more general in the sense that they can represent, albeit partially, any preference relation, whereas LP-trees can only represent (generalised) lexicographic preference relations.
Our contributions can be summarised as follows:
• we show that conditionally acyclic CP-nets, introduced by [28], are exactly the CP-nets to which Forward Sweep, a polytime algorithm for preferential optimisation, can be applied;
• we introduce an even more compact version of CP-nets, called CO-nets (conditional optimality networks), that can be used to answer optimisation queries (under some structural condition) about, in particular, but not restricted to, generalised lexicographic preferences;
• we show that preferential optimisation can be used for encoding and decoding data effectively in a Minimum Description Length approach to learning preferences, with an empirical comparison to popular compression algorithms, paving the way towards unsupervised learning of CP-nets with the MDL principle.
Section 2 gives some background about preference models. Section 3 proves the characterisation of conditionally acyclic CP-nets as the class of CP-nets that admit efficient preferential optimisation. Section 4 presents conditional optimality networks and their associated optimisation algorithm. Section 5 studies the relationship between different subclasses of LP-trees and CP-nets concerning the optimisation query. Finally, Section 6 presents a new encoding and decoding technique based on preferential optimisation, experimentally compared to popular compression algorithms. Section 7 concludes.
2 Background and notations
Combinatorial Domain We consider a combinatorial domain over a finite set X of discrete attributes that characterise the possible alternatives. Each attribute X ∈ X has a finite set of possible values X.
X denotes the Cartesian product of the domains of the attributes in X; its elements are called alternatives. We often use the symbols o, o′, o1, o2, … to denote alternatives. In the following, n is the number of attributes in X, and d is a bound on the size of the domains of the attributes: for every X ∈ X, 2 ≤ |X| ≤ d.
For a subset U of X, we will denote by U the Cartesian product of the domains of the attributes in U; every u ∈ U is an instantiation of U, or partial instantiation (of X). If v is an instantiation of some V ⊆ X, v[U] denotes the restriction of v to the attributes in V ∩ U. We say that instantiations u ∈ U and v are compatible, written u ∼ v, if v[U ∩ V] = u[U ∩ V]. If U ⊆ V and v[U] = u, we say that v extends u, also written u ⊆ v.
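To make these notations concrete, here is a minimal Python sketch (our own illustration, not part of the original formalism), in which partial instantiations are represented as dictionaries mapping attribute names to values; the function names are ours:

def restrict(v, U):
    """v[U]: restriction of instantiation v to the attributes in U."""
    return {X: x for X, x in v.items() if X in U}

def compatible(u, v):
    """u ~ v: u and v agree on every attribute on which both are defined."""
    return all(u[X] == v[X] for X in u.keys() & v.keys())

def extends(v, u):
    """u ⊆ v: v is defined wherever u is, with the same values."""
    return all(X in v and v[X] == x for X, x in u.items())

u = {"A": "a"}
v = {"A": "a", "B": "b"}
assert compatible(u, v) and extends(v, u) and restrict(v, {"A"}) == u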
Preference relations In this paper, we consider only preference relations that do not allow for indifference: such a preference relation is a linear order over X, that is, a total, transitive, irreflexive binary relation over X, often denoted ≻. For alternatives o, o′ ∈ X, o ≻ o′ indicates that o is strictly preferred to o′.
In many settings, one is essentially interested in finding some alternative that is optimal in some restricted set of alternatives, in the sense that no other alternative "beats" it / is strictly more preferred in that set. In particular, in interactive configuration settings, given a partial instantiation u ∈ U already built by a user, it can be useful to show the user the best completion of u according to her preferences. We denote by opt(u, ≻) the most preferred – according to ≻ – alternative compatible with u; it exists and is unique in a linear order; in later sections we will study some incomplete representations of preferences, and we will use the same notation opt(u, ≻) when the most ≻-preferred alternative compatible with u exists and is unique.
We say that alternative o, such that o[U] = u, is u-undominated if and only if there is no alternative o′ such that o′[U] = u and o′ ≻ o. o is undominated if there is no alternative o′ such that o′ ≻ o.
Graphical models It has long been observed that, given the exponential size of X, for any practical purpose one must make the assumption that the preference relations of interest exhibit some structure. Several models that have been studied in the AI literature to represent preference relations separate the information into three components: 1) a graph (or sometimes a hypergraph), representing some relationship between attributes; 2) some local information about preferences, in tables associated with the nodes of the graph; 3) a rule to aggregate the local preferences into a global binary relation over X. We focus below on two such models: Conditional Preference Networks (CP-nets) and Lexicographic Preference Trees (LP-trees).
CP-nets CP-nets have been introduced by [4] as a tool to make explicit a particular kind of structure, called preferential (in)dependence. We give below a slightly more general definition¹ of preferential independence than that of [4]:
Definition. Attribute X is said to be preferentially independent of attribute Y ≠ X given u ∈ U, for some U ⊆ X \ {X, Y}, with respect to preference relation ≻ if for every x, x′ ∈ X, y, y′ ∈ Y, v ∈ X \ (U ∪ {X, Y}): uvxy ≻ uvx′y if and only if uvxy′ ≻ uvx′y′. We write that X is preferentially independent of Y if X is preferentially independent of Y given the empty assignment.
¹ Although the initial definition of CP-nets allows for indifference in conditional preference tables, [4] also point out that this leads to some difficulties in the semantics of CP-nets.
Note that preferential independence is not necessarily symmetric. A CP-net is a structure that captures / represents the preferential independencies inherent in a given preference relation. Figure 1a depicts a CP-net ϕ0. More generally, a CP-net is a triple ϕ = (X, Pa, CPT), where:
• Pa associates to every attribute X ∈ X a subset Pa(X) of X \ {X}; thus Pa defines a directed graph over X, where there is an edge (X, Y) if and only if X ∈ Pa(Y). Pa(Y) is the set of parents of Y;
• CPT is a set of conditional preference tables, one table CPT(X) for every attribute X: CPT(X) contains, for every instantiation u of Pa(X), a rule u : >, where > is a linear order over X.
Example 1. For the CP-net ϕ0 of figure 1a, Pa(A) = {} and CPT(A) = {a > ¯a}; Pa(B) = {A, C} and CPT(B) = {a ∨ ¯c : b > ¯b, ¯ac : ¯b > b}.
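As a hedged illustration (one possible in-memory encoding, not the authors' implementation), the CP-net ϕ0 of Example 1 can be stored in Python with one list of rules per attribute, a rule being a pair (condition, local order); in the strings below, ¬a stands for ¯a, and a disjunctive condition such as a ∨ ¬c is spelled out as several rules whose conditions need not assign every parent:

# CP-net ϕ0: each rule is (condition on parents, preference order).
cp_net_phi0 = {
    "A": [({}, ["a", "¬a"])],                       # a > ¬a, unconditionally
    "B": [({"A": "a"}, ["b", "¬b"]),                # covers a ∨ ¬c ...
          ({"A": "¬a", "C": "¬c"}, ["b", "¬b"]),    # ... together with ¬a¬c
          ({"A": "¬a", "C": "c"}, ["¬b", "b"])],    # ¬ac : ¬b > b
    "C": [({"A": "¬a"}, ["¬c", "c"]),               # covers ¬a ∨ ¬b ...
          ({"A": "a", "B": "¬b"}, ["¬c", "c"]),     # ... together with a¬b
          ({"A": "a", "B": "b"}, ["c", "¬c"])],     # ab : c > ¬c
}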
Let us call swap any pair of alternatives that have identical values for every attribute except one. A CP-net ϕ orders every swap {o, o′} as follows: let X be the only attribute such that o[X] ≠ o′[X], let u = o[Pa(X)] = o′[Pa(X)], and let u : > be the corresponding rule in CPT(X); then (o, o′) is a worsening swap (w.r.t. ϕ) if and only if o[X] > o′[X]. The transitive closure of all the worsening swaps sanctioned by ϕ is, by definition, transitive, and we denote it by ≻ϕ. It is not necessarily irreflexive, and not complete in general. Figure 1b depicts ≻ϕ0: edges o → o′ represent the worsening swaps sanctioned by ϕ0. Some of them are redundant, since implied, by transitivity of ≻ϕ0, by other swaps: for instance the fact that abc ≻ϕ0 ¯abc is implied by the worsening swaps (abc, ab¯c), (ab¯c, ¯ab¯c), (¯ab¯c, ¯abc).
CP-net induced by a preference relation Given a preference relation (linear order) ≻ over X, it is possible to define a CP-net ϕ≻ = (X, Pa, CPT) that captures the preferential independencies between attributes that are inherent in ≻: for every pair of attributes X, Y ∈ X, X ∈ Pa(Y) if and only if Y is not preferentially independent of X (w.r.t. ≻). Then, for every X ∈ X and every u ∈ Pa(X), CPT(X) contains the rule u : >, where > is the linear order over X such that x > x′ if and only if xuv ≻ x′uv for every v ∈ X − (Pa(X) ∪ {X}).
Example 2. The CP-net induced by the linear order abc ≻ ab¯c ≻ a¯b¯c ≻ a¯bc ≻ ¯ab¯c ≻ ¯a¯b¯c ≻ ¯a¯bc ≻ ¯abc is ϕ0. For instance, whatever the values x and y given to B and C respectively, it holds that axy ≻ ¯axy; thus A does not preferentially depend on B, nor on C, and CPT(A) = {a > ¯a}.
Lexicographic Preference Trees LP-trees generalise lexicographic orders, which have been widely studied in decision making [10]. As an inference mechanism, they are equivalent to the search trees used by [5], and formalised by [27, 28]. As a preference representation and elicitation language, slightly different definitions for LP-trees have been proposed by [2, 7, 9].
In the formal model proposed by [2], a lexicographic preference tree, or LP-tree for short, is composed of two parts: a rooted tree indicating the relative importance of the attributes, and tables indicating how to compare outcomes that agree on some attributes. An example of an LP-tree is depicted in Figure 1c. Each node of the importance tree is labelled with an attribute X ∈ X, and is either a leaf of the tree, or has one single, unlabelled outgoing edge, or has |X| outgoing edges, each one being labelled with one of these values.
Figure 1: Examples of a CP-net, two LP-trees and a CO-net.
(a) A CP-net ϕ0: A: a > ¯a; B: a∨¯c : b > ¯b, ¯ac : ¯b > b; C: ¯a∨¯b : ¯c > c, ab : c > ¯c.
(b) A graphical representation of ≻ϕ0: the nodes are the alternatives abc, ab¯c, a¯b¯c, a¯bc, ¯ab¯c, ¯a¯b¯c, ¯a¯bc, ¯abc, and the edges are the worsening swaps sanctioned by ϕ0.
(c) LP-tree ψ0 and the corresponding preference relation ≻ψ0: the root, labelled A (a > ¯a), has two outgoing edges, labelled a and ¯a; below the edge labelled a, a node labelled B (b > ¯b) followed by a node labelled C (b : c > ¯c; ¯b : ¯c > c); below the edge labelled ¯a, a node labelled C (¯c > c) followed by a node labelled B (¯c : b > ¯b; c : ¯b > b). The corresponding relation is abc ≻ ab¯c ≻ a¯b¯c ≻ a¯bc ≻ ¯ab¯c ≻ ¯a¯b¯c ≻ ¯a¯bc ≻ ¯abc.
(d) A linear LP-tree and the corresponding preference relation: a single branch A (a > ¯a), then B (b > ¯b), then C (¯c > c); the corresponding relation is ab¯c ≻ abc ≻ a¯b¯c ≻ a¯bc ≻ ¯ab¯c ≻ ¯abc ≻ ¯a¯b¯c ≻ ¯a¯bc.
(e) A CO-net ϕ∗0: A: a; B: a∨¯c : b, ¯ac : ¯b; C: ¯a∨¯b : ¯c, ab : c.
No attribute can appear twice in a branch. For a given node N, Anc(N) denotes the set of attributes that label nodes above N. The values of the attributes that are at a node above N with a labelled outgoing edge influence the preference at N. We denote by Inst(N) the set of attributes that label nodes above N with a labelled outgoing edge, and by inst(N) the tuple of values of the edge labels between the root and N. Also, we define NonInst(N) = Anc(N) \ Inst(N).
Moreover, one conditional preference table CPT(N) is associated to each node N of the tree: if X is the attribute that labels N, then the table contains rules of the form v : >, where > is a linear order over X, and v ∈ V for some V ⊆ NonInst(N). The rules in CPT(N) must be consistent: given two rules v : > and v′ : >′, it must be the case that v and v′ are not compatible. Moreover, we impose that LP-trees are complete: every attribute must appear exactly once on every branch, and for any u ∈ NonInst(N), there must be one rule v : > ∈ CPT(N) such that u and v are compatible.
Every LP-tree ϕ induces a linear order over X, denoted ≻ϕ: for any node N labelled by X, consider a pair of alternatives o and o′ such that o[Inst(N)] = o′[Inst(N)] = inst(N) and o[X] ≠ o′[X]: N is said to decide the pair (o, o′); furthermore, there must be a unique rule v : > in CPT(N) such that o[NonInst(N)] = o′[NonInst(N)] is compatible with v: then o ≻ϕ o′ if and only if o[X] > o′[X].
Figure 1c also shows the preference relation induced by the depicted LP-tree ψ0, which is also that of example 2, so ϕ0 is the CP-net induced by ≻ψ0.
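To make the induced order concrete, here is a hedged Python sketch of comparison with an LP-tree (one possible spelling of ψ0, not the authors' code): a node holds its attribute, its rules (condition on non-instantiated ancestors, local order), and either one child per value of the attribute (labelled edges) or a single unlabelled child stored under the key None; again ¬a stands for ¯a:

def lp_compare(node, o1, o2):
    """Return True iff o1 is preferred to o2 (they must differ somewhere)."""
    while node is not None:
        X = node["attr"]
        if o1[X] != o2[X]:
            # this node decides the pair: find the applicable rule
            for cond, order in node["rules"]:
                if all(o1[P] == v for P, v in cond.items()):
                    return order.index(o1[X]) < order.index(o2[X])
        # descend: follow the labelled edge for the shared value, if any
        edges = node["edges"]
        node = edges.get(o1[X], edges.get(None))
    raise ValueError("alternatives are identical")

psi0 = {"attr": "A", "rules": [({}, ["a", "¬a"])], "edges": {
    "a": {"attr": "B", "rules": [({}, ["b", "¬b"])], "edges": {None:
          {"attr": "C", "rules": [({"B": "b"}, ["c", "¬c"]),
                                  ({"B": "¬b"}, ["¬c", "c"])], "edges": {}}}},
    "¬a": {"attr": "C", "rules": [({}, ["¬c", "c"])], "edges": {None:
           {"attr": "B", "rules": [({"C": "¬c"}, ["b", "¬b"]),
                                   ({"C": "c"}, ["¬b", "b"])], "edges": {}}}},
}}
# ¬ab¬c ≻ψ0 ¬a¬b¬c, decided at the node labelled B of the ¬a branch:
assert lp_compare(psi0, {"A": "¬a", "B": "b", "C": "¬c"},
                        {"A": "¬a", "B": "¬b", "C": "¬c"})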
An LP-tree is said to have unconditional preferences if, for a given attribute X, every node labelled with X contains a unique, identical rule with an empty condition. We denote by UP-LPT the class of such LP-trees. In particular, figure 1d depicts a linear LP-tree: it has unconditional preferences, and a single branch. This is a strong restriction on expressiveness: linear LP-trees represent the usual, unconditional lexicographic preference relations.
Bräuning et al. [7, 6] extend the expressiveness of LP-trees by allowing a node to be labelled with a set of attributes, considered as a single high-dimensional attribute; we do not consider such LP-trees here.
3 Conditional acyclicity and the Forward Sweep
procedure
Boutilier et al. [4] proved that, when a CP-net ϕ is acyclic, it has a unique undominated alternative, which can be computed in polynomial time with the Forward Sweep procedure. We prove in this section that this approach characterises the class of conditionally acyclic CP-nets, introduced by [28], which strictly contains the class of those CP-nets whose graph is acyclic.
Definition ([28]). CP-net ϕ is conditionally acyclic if there is some LP-tree ψ such that ≻ψ extends ≻ϕ.
The CP-net ϕ0 of figure 1a is conditionally acyclic, since ≻ϕ0 is contained in ≻ψ0, where ψ0 is the LP-tree of figure 1c. In fact, ψ0 is the only LP-tree which has ϕ0 as induced CP-net.
We introduce the following notation: given a CP-net ϕ = (X, Pa, CPT), for every attribute X ∈ X and every partial instantiation u ∈ U for some U ⊆ X, Paϕ(X|u) is the set of attributes of which X is not independent given u.
For the CP-net of figure 1a, although Pa(C) = {A, B}, Pa(C|¯a) = ∅: whatever the value of B, when A = ¯a, the ordering over C is ¯c > c.
Algorithm 1 below is a generic Forward Sweep algorithm. It initialises an empty instantiation inst at line 1, and iteratively expands inst by choosing an attribute at line 3, choosing a value for that attribute at line 4, and adding this value to inst. FSgen is non-deterministic, as it leaves open the choices of attribute and value at every iteration.
Algorithm 1: Generic Forward Sweep (FSgen) from [4]
Data: ϕ, a CP-net
Result: o ∈ X
1 inst ← ∅
2 while possible do
3   choose X ∈ X − Var(inst), s.t. Paϕ(X|inst) = ∅
4   choose x ∈ X
5   inst ← inst · x
6 if Var(inst) = X then return inst;
7 else return FAILURE;
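A runnable Python sketch of FSgen (our own, under the rule-based CP-net representation assumed earlier, with function names of our choosing) is given below; as explained later in this section, the test Paϕ(X|inst) = ∅ can be implemented by checking that all rules compatible with inst specify the same local order:

import random

def applicable_orders(rules, inst):
    # orders of the rules whose condition does not conflict with inst
    return [order for cond, order in rules
            if all(inst.get(P, v) == v for P, v in cond.items())]

def fs_gen(cp_net, choose_attr=random.choice, choose_val=random.choice):
    inst = {}
    while True:
        # attributes not yet assigned whose local order is already determined
        ready = [X for X in cp_net if X not in inst and
                 len({tuple(o) for o in applicable_orders(cp_net[X], inst)}) == 1]
        if not ready:
            break
        X = choose_attr(ready)                                       # line 3
        inst[X] = choose_val(applicable_orders(cp_net[X], inst)[0])  # line 4
    return inst if len(inst) == len(cp_net) else "FAILURE"

With the conditionally acyclic cp_net_phi0 defined earlier, fs_gen never fails whatever the random choices; Proposition 1 below states that this behaviour is in fact a characterisation.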
We will adapt this algorithm in later sections to perform various tasks of interest, and in particular to compute the undominated alternative of some conditionally acyclic CP-net, as does the Forward Sweep procedure proposed by [4].
We close this section with a proposition that shows that algorithm
FSgen characterises the class of conditionally acyclic CP-nets.
Proposition 1. Algorithm FSgen always succeeds, whatever the choices of variables or values at lines 3 and 4, if and only if ϕ is conditionally acyclic.
In particular, if the input CP-net ϕ is not conditionally acyclic, there is at least one sequence of choices at lines 3 and 4 that will lead to failure: at some point, there is no more X ∈ X − Var(inst)
s.t. Paϕ(X|inst) = ∅. Note that the algorithm is not intended to be a practical tool to check if a given CP-net is conditionally acyclic, as checking that the algorithm never fails requires a number of trials that is exponential in n: this problem is PSPACE-complete [28].
Proof. [28] proves that a CP-net ϕ is equivalent to a set of CP-statements that essentially corresponds to the union of the local preference rules in the conditional preference tables of ϕ for all attributes. [8] gives an algorithm that computes a complete LP-tree that is consistent with a given conditionally acyclic set of CP-statements. The algorithm is as follows:²
² The algorithm proposed by [8] is more general in that it takes another input, which is a bound on the number of attributes allowed at each node of the tree; the algorithm that we give here is a restriction where this bound is set to 1, that is, we build LP-trees with exactly one attribute at each node.
Algorithm 2: Build complete LP-tree (from [8])
Data: ϕ, a set of CP-statements
Result: a complete LP-tree ψ s.t. ≻ψ ⊇ ≻ϕ, or FAILURE
1 ψ ← {an unlabelled root node}
2 while ψ contains some unlabelled node do
3   choose an unlabelled node N of ψ
4   (X, >) ← chooseAttribute(N, ϕ)
5   if X = FAILURE then return FAILURE;
6   label N with (X, >)
7   if Anc(N) ∪ {X} ≠ X then
8     for x ∈ X do
9       add a new unlabelled node to ψ
10      attach this node to N with an edge labelled with x
11 return ψ
Essentially, given a yet unlabelled node N, the algorithm calls at step 4 the helper function chooseAttribute, which returns an attribute X and a linear order > over X, and expands the tree at step 8 with a child of N for every value in X. [8] prove that the algorithm succeeds if and only if ϕ is conditionally acyclic.
The reader is referred to [8] for details about the helper function chooseAttribute in the context of sets of CP-statements. It is not difficult to see that, in the case of CP-nets, this condition amounts to picking an attribute X such that all rules u : > ∈ CPTϕ(X), where u is consistent with the instantiations made above N, specify the same order > over X; that is, this condition amounts to the fact that Paϕ(X|inst) = ∅.
Thus line 4 in the algorithm above is similar to lines 3 and 4 in FSgen, except that in FSgen a single value x is associated to X at line 4, instead of a linear order > at line 4 in the above algorithm. Moreover, whereas the above algorithm expands the current node at line 8 with one subtree for every possible value of the chosen attribute X, FSgen only expands inst at line 5 in a unique way, with the chosen value x.
Therefore, building a tree with the above algorithm amounts to several runs of the FSgen algorithm building all branches in parallel. Thus, FSgen always succeeds, for all possible choices at lines 3 and 4, if and only if the above algorithm succeeds, if and only if ϕ is conditionally acyclic.
4 Conditional Optimality Networks
As far as optimisation is concerned, it turns out that the only information that is needed, in a conditionally acyclic CP-net, is the optimal values in the preference tables. Taking that into account, in this section we define a "lightweight" version of CP-nets, that we call Conditional Optimality Networks, or CO-nets for short. They are similar to CP-nets, but only contain information about optimality. We will prove that this information is sufficient to reason about a given linear order, provided that the induced CO-net is conditionally acyclic.
Definition. A CO-net over X is a tuple N = (X, Pa, COT), where Pa defines a directed graph over X, and where COT is a conditional optimality table such that, for every attribute X and every u ∈ Pa(X), COT(X, u) contains a single value of X: the optimal value for X given u.
For example, figure 1e shows the CO-net induced by the LP-tree ψ0. Given a preference relation ≻ over X, there is a unique CO-net that captures the information about conditional optimality contained in ≻; let us denote it by ϕ∗≻ = (X, Pa, COT): X ∈ Pa(Y) if Y is not independent of X for optimality, according to the next definition, and COT in ϕ∗≻ contains, for every attribute Y ∈ X and every u ∈ Pa(Y), the only undominated value of Y, given u.
Definition. Attribute Y is said to be independent for optimality of attribute X ≠ Y given u ∈ U, for some U ⊆ X \ {X, Y}, with respect to preference relation ≻ if for every x, x′ ∈ X, y ∈ Y, v ∈ X \ (U ∪ {X, Y}): uvxy is uvx-undominated if and only if uvx′y is uvx′-undominated. We write that Y is independent of X for optimality if Y is independent of X for optimality given the empty assignment.
Note that, given a preference relation ≻, preferential independence implies independence for optimality. Therefore, the graph of the CP-net induced by ≻ contains the graph of the CO-net induced by ≻; furthermore, since, in the induced CP-net, the CPTs contain linear orders over the domains of the variables, they also contain the information about optimal values. Thus the induced CO-net is weaker than the induced CP-net. Obviously, in the case where all variables are binary, the induced CO-net is equivalent to the induced CP-net.
We now consider an instantiation of the Forward Sweep algorithm that specifically computes the only undominated completion of a partial instantiation u, given a conditionally acyclic CP-net or CO-net.
Algorithm 3: Forward Sweep for optimisation (FSopt)
Data: ϕ, a CP-net or CO-net; U ⊆ X; u ∈ U
Result: opt(u, ≻ϕ)
1 inst ← ∅
2 while possible do
3   choose X ∈ X − Var(inst), s.t. Paϕ(X|inst) = ∅
4   if X ∈ U then inst ← inst · u[X];
5   else inst ← inst · opt(X|inst);
6 if Var(inst) = X then return inst;
7 else return FAILURE;
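Below is a hedged Python sketch of FSopt (our own, with names of our choosing), under an analogous assumed representation where each attribute of a CO-net maps to rules (condition, optimal value); an attribute is available once all rules compatible with inst agree on its optimal value:

def fs_opt(co_net, u):
    inst = {}
    while len(inst) < len(co_net):
        progressed = False
        for X, rules in co_net.items():
            if X in inst:
                continue
            opts = {val for cond, val in rules
                    if all(inst.get(P, v) == v for P, v in cond.items())}
            if len(opts) == 1:                           # Paϕ(X | inst) = ∅
                inst[X] = u[X] if X in u else opts.pop() # lines 4-5
                progressed = True
        if not progressed:
            return "FAILURE"        # no available attribute: FSopt fails
    return inst

# The CO-net ϕ*0 of figure 1e, with ¬a standing for ¯a and disjunctive
# conditions spelled out rule by rule:
co_net_phi0 = {
    "A": [({}, "a")],
    "B": [({"A": "a"}, "b"), ({"A": "¬a", "C": "¬c"}, "b"),
          ({"A": "¬a", "C": "c"}, "¬b")],
    "C": [({"A": "¬a"}, "¬c"), ({"A": "a", "B": "¬b"}, "¬c"),
          ({"A": "a", "B": "b"}, "c")],
}
# opt(¬b, ≻ψ0) = a¬b¬c:
assert fs_opt(co_net_phi0, {"B": "¬b"}) == {"A": "a", "B": "¬b", "C": "¬c"}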
Definition. We say that a CO-net is conditionally acyclic if FSopt always succeeds.
The next proposition shows that in the case where the CO-net induced by some preference relation is conditionally acyclic, it contains all the necessary information to compute opt(·, ≻).
Proposition 2. Let ≻ be a preference relation over X. If the induced CO-net ϕ∗≻ is conditionally acyclic, then, for every u ∈ U, for every U ⊆ X, opt(u, ≻) = FSopt(u, ϕ∗≻).
Proof. The proof is similar to that of lemma 3 in [4]. Let U ⊆ X, u ∈ U. By definition of conditional acyclicity, we know that FSopt always succeeds; let o∗(u) be the alternative returned by the algorithm when called with (U, u). Let X_i be the attribute chosen at the i-th iteration, x∗_i the value taken by the variable X_i at the i-th iteration, and inst_i the value of inst after the i-th iteration. We have inst_1 = x∗_1, inst_2 = x∗_1 x∗_2, …, and inst_n = o∗(u). Let o be another alternative such that o[U] = u. Define the sequence of alternatives (o_i)_{i = n, …, 0} as follows: o_n = o∗(u), and for every i = n, n−1, …, 1: o_{i−1} = o_i, except if X_i ∉ U, in which case o_{i−1} is identical to o_i except that o_{i−1}[X_i] = o[X_i]. Then o_0 = o. Moreover, for every i = n, n−1, …, 1, since o∗[X_i] = x∗_i is the optimal value for X_i given inst_{i−1}, we have that either o∗[X_i] > o[X_i] and o_i ≻ o_{i−1}, or o∗[X_i] = o[X_i] and o_i = o_{i−1}; because o ≠ o∗(u), for at least one i it must be the case that o_i ≻ o_{i−1}. Thus o∗(u) ≻ o_0 = o.
The important point here is that, if one wants to elicit or learn preferences to support a decision maker in tasks that only involve optimisation queries, then all that is needed is a conditionally acyclic CO-net. Note that the above result does not hold anymore for non-conditionally acyclic CO-nets, as shown in the next example.
Example 3. Consider two binary attributes A and B, and the following preference relation: ab ≻ ¯a¯b ≻ a¯b ≻ ¯ab, with one undominated alternative, ab. The induced CO-net, depicted below, is not conditionally acyclic; the partial order ≻ϕ defined by the induced CP-net ϕ has two undominated alternatives: ab and ¯a¯b.
The induced CO-net: A: b : a; ¯b : ¯a. B: a : b; ¯a : ¯b.
The induced CP-net: A: b : a > ¯a; ¯b : ¯a > a. B: a : b > ¯b; ¯a : ¯b > b.
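A standalone Python check of Example 3, under the representation assumed in the earlier sketches: with the empty instantiation, each attribute still has two compatible rules that disagree on the optimal value, so FSopt has no available attribute and fails immediately:

co_net_ex3 = {
    "A": [({"B": "b"}, "a"), ({"B": "¬b"}, "¬a")],
    "B": [({"A": "a"}, "b"), ({"A": "¬a"}, "¬b")],
}
inst = {}
for X, rules in co_net_ex3.items():
    opts = {val for cond, val in rules
            if all(inst.get(P, v) == v for P, v in cond.items())}
    print(X, "available" if len(opts) == 1 else "blocked", sorted(opts))
# both attributes print "blocked": A waits on B, and B waits on A.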
Besides, since independence for optimality is weaker than preferential independence, it is possible for cyclic CP-nets to have associated conditionally acyclic CO-nets. Remark that this can only happen if there exist non-binary attributes.
Example 4. Consider two attributes A, B with A = {a1, a2, a3}, B = {b1, b2}, and the following preference relation: a1b1 ≻ a1b2 ≻ a3b2 ≻ a2b2 ≻ a2b1 ≻ a3b1. The induced CP-net is not conditionally acyclic but the induced CO-net is acyclic.
Induced CO-net: A: a1. B: a1 : b1; a2 : b2; a3 : b2.
Induced CP-net: A: b1 : a1 > a2 > a3; b2 : a1 > a3 > a2. B: a1 : b1 > b2; a2 : b2 > b1; a3 : b2 > b1.
5 Expressiveness
In this section, we explore the relationship between some classes of CO-nets and the classes of preference relations that induce them.
Separable preferences We start with some results about preferential separability. A preference relation is said to be separable (resp. opt-separable) if, for every pair of attributes (X, Y), X is preferentially independent of Y (resp. independent of Y for optimality).
By definition of CP-nets (resp. CO-nets), a preference relation is separable (resp. opt-separable) if and only if the induced CP-net (resp. CO-net) has no edge. Moreover, the preference relation defined by an LP-tree ψ is separable if and only if ψ ∈ UP-LPT; in other words: the preference relation defined by ψ is separable if and only if ψ has unconditional rules. Conversely, given a CP-net or a CO-net ϕ with no edge, there are n! linear LP-trees that induce ϕ (defined by the possible orderings of the attributes along the single branch of linear LP-trees), and even more LP-trees with unconditional preferences but conditional importance (that is, LP-trees with several branches). Note that there are also preference relations that induce ϕ and that are not represented by any LP-tree.
In settings where one needs to learn the preferences of a decision maker for the sole purpose of optimisation, and if it can be assumed that the decision-maker's preferences are separable, the above remark, combined with that of Prop. 2, formalises the unsurprising fact that one only has to learn the optimal value for every attribute, independently from the values of the other attributes.
Importantly, it should be noted that this does not extend to settings where one would need to compare two alternatives: even if the unknown preference relation is separable, comparing two alternatives may require information that is not captured by the induced CP-net.
Unconditional importance It has long been recognised that CP-nets alone cannot capture all the information needed to represent most preference relations. LP-trees, on the other hand, completely represent some preference relations, with some information about the relative importance of the attributes: in a given branch, every attribute is more important than all attributes below it in that branch, for comparing alternatives that are decided in that branch. Figure 1c shows an LP-tree where the relative importance of attributes B and C is conditioned by the value of attribute A: for comparing alternatives that have value a for A, B is more important than C; but C is more important than B for comparing alternatives that have value ¯a for A. Let UI-LPT denote the class of LP-trees with unconditional importance, that is, where the ordering of the attributes is the same in every branch; such an LP-tree is always equivalent to an LP-tree with a single branch.
Proposition 3. Given ψ ∈ UI-LPT, the CP-net and CO-net induced by ψ are acyclic; and, given an acyclic CP-net or CO-net ϕ, there exists some ψ ∈ UI-LPT such that ϕ is induced by ψ.
Proof. Consider some LP-tree ψ ∈ UI-LPT with unconditional order of importance X1, X2, …, Xn: in every branch, the node at depth i is labelled with attribute Xi, and the preference rules at that node can only depend on X1, …, Xi−1; thus in the induced CP-net / CO-net, Pa(Xi) ⊆ {X1, …, Xi−1}. Therefore the graph is acyclic. Conversely, given an acyclic CP-net ϕ, consider any topological ordering X1, …, Xn of the attributes: an LP-tree ψ that induces ϕ can be built with a single branch, with the nodes labelled from root to bottom with the attributes in the order X1, …, Xn, and with the same CPT at the node labelled with Xi in ψ as at the node labelled with Xi in ϕ. If ϕ is a CO-net, an LP-tree can be constructed in the same way, but one must also complete the preference table at the node labelled with Xi with any local ordering over Xi that has the correct optimal value, given the values of the parents of Xi in ϕ.
Here again, given an acyclic CP-net / CO-net ϕ, there are preference relations that induce ϕ but cannot be represented by any LP-tree. However, the result above shows that in settings where one needs to learn the preferences of a decision maker for the sole purpose of optimisation, and if it can be assumed that the decision-maker's preferences can be represented with an LP-tree with unconditional importance, one can safely search for an acyclic CO-net. This is important from the point of view of machine learning, since it puts a strong bias on the search space, and also limits the amount of information that must be induced. For example, for n binary attributes, there are 2^n different CO/CP-nets but 2^n × n! linear LP-trees; and, in a conditionally acyclic CO/CP-net, there are n nodes and at most O(n²) edges, while in an LP-tree there are at most O(2^n) nodes and edges.
6 Forward Sweep for encoding / decoding data
Learning graphical models of preferences has mostly been attempted in settings where the input data is a set of pairwise comparisons, that is, pairs of alternatives (o, o′) where o is deemed preferred to o′. In such settings, one attempts to learn a model ϕ such that o ≻ϕ o′: this leads to an empirical loss function that counts the number of "misordered" pairs in the input data. In the case of CP-nets, checking whether o ≻ϕ o′ holds is an NP-complete problem. Besides, CP-nets do not define a total relation, thus this empirical loss function is ill-defined.
In order to alleviate these problems, works on learning CP-nets often put some restrictions on the graphical structure of the CP-nets, like a bound on the number of parents of each attribute [21, 16, 1, e.g.], and/or restrict the input to some simple types of pairwise comparisons [18, 19, e.g.]. Learning LP-trees is easier [2, 7, 6, 22], but at the cost of a significant loss in expressiveness.
Fargier et al. [9] propose to learn a preference relation from a different kind of data: a set of alternatives that have been chosen by users of some decision-aid system. The idea is that the more common a value is in this set of chosen alternatives, the more likely it is to characterise the preferred alternative(s). We now show how this idea can be combined with the Minimal Description Length induction principle to enable a promising new way of learning CO-nets.
The idea of the MDL principle for machine learning is that, given some data D and a class of possible models that may "explain" D, one should choose the model H that enables the lossless compression of D with minimum size [15]. Formally, if L(D|H) denotes the length of the representation that permits to retrieve D knowing H, one can define the minimum description length for D given a class of models H as L(D) = min_{H∈H} L(H) + L(D|H). MDL has been successfully applied in the unsupervised learning of many classes of models, such as Bayesian networks [25, 20], causal networks [23], formal grammars [13], and applied to data mining in graphs [26].
When H is a CO-net, the size of H is simply the sum of the size of the graph and of the size of the conditional optimality table:

L(H) = LN(|X|) + Σ_{N∈X} [ LN(|Pa(N)|) + log₂ C(|X| − 1, |Pa(N)|) + |Pa(N)| log₂ |N| ]    (1)

where LN is the length of the Rissanen universal integer encoding [24], defined as LN(n) = log∗(1 + n) + log c₀, where log∗ is the expansion log n + log log n + …, including only the positive terms, and c₀ is a constant; where C(·, ·) denotes the binomial coefficient; and where, slightly abusing notation, |N| denotes the domain size of the attribute labelling N and, in the last term, |Pa(N)| the number of instantiations of Pa(N). These terms encode, in order: the total number of nodes; and, for each node, the number of its parents, its set of parents, and the optimal value for each instantiation of its parents.
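Below is a hedged Python sketch of equation (1) (our own reading, with hypothetical names): following the prose, the last term is taken as one optimal value, of log₂|N| bits, per instantiation of the parents:

from math import comb, log2

def l_universal(n, c0=2.865):
    """Rissanen's universal code length LN(n) = log*(1 + n) + log2(c0)."""
    total, term = log2(c0), log2(1 + n)
    while term > 0:
        total += term
        term = log2(term)
    return total

def model_length(parents, dom_size):
    """L(H): graph plus conditional optimality tables, as in equation (1)."""
    n_attrs = len(parents)
    length = l_universal(n_attrs)                      # total number of nodes
    for X, pa in parents.items():
        n_parent_insts = 1
        for P in pa:
            n_parent_insts *= dom_size[P]
        length += (l_universal(len(pa))                   # number of parents
                   + log2(comb(n_attrs - 1, len(pa)))     # which parents
                   + n_parent_insts * log2(dom_size[X]))  # optimal values
    return length

# CO-net ϕ*0: A has no parent, B has parents {A, C}, C has parents {A, B}.
print(model_length({"A": [], "B": ["A", "C"], "C": ["A", "B"]},
                   {"A": 2, "B": 2, "C": 2}))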
As a preliminary step towards the application of the MDL principle to learn CO-nets, we propose in the remainder of this section a simple way of "coding" alternatives, given a CO-net. The approach makes use of the optimisation query.
Algorithm 4: Forward Sweep for encoding (FSenc)
Data: ϕ, a CO-net; o ∈ X
Result: code(o, ϕ)
1 inst ← ∅; code ← ∅
2 while possible do
3   choose X ∈ X − Var(inst), s.t. Paϕ(X|inst) = ∅
4   if o[X] ≠ opt(X|inst) then code ← code · o[X];
5   inst ← inst · o[X]
6 if Var(inst) = X then return code;
7 else return FAILURE;
Given a strict partial order ≻, consider an alternative o and a partial instantiation u such that opt(u, ≻) = o: u, being a partial instantiation, is shorter than the alternative o, so it can be seen as a short code for o – provided there is a practical algorithm to retrieve o from u, which is the case with conditionally acyclic CO-nets, since we know from Proposition 2 that we can compute opt(u, ≻) with the FSopt algorithm. Given o, there will be in general several partial instantiations u such that opt(u, ≻) = o, but if we can uniquely define one such partial instantiation for every o, then we have a way of encoding alternatives.
In the following, opt⁻¹ denotes the inverse function of opt: given an alternative o, opt⁻¹(o, ≻) = {u | opt(u, ≻) = o}.
Definition. Let ≻ be a strict partial order over X. Suppose that for every o, opt⁻¹(o, ≻) contains a unique u with minimal size. Then we say that ≻ is uniquely encoding, and we define code(o, ≻) to be this unique minimal u in opt⁻¹(o, ≻).
Example 5. Consider again the preference ≻ that corresponds to the LP-tree of figure 1c: abc ≻ ab¯c ≻ a¯b¯c ≻ a¯bc ≻ ¯ab¯c ≻ ¯a¯b¯c ≻ ¯a¯bc ≻ ¯abc. Then opt(¯ab, ≻) = ¯ab¯c, because it is the most preferred alternative compatible with ¯ab. In fact, opt⁻¹(¯ab¯c, ≻) = {¯a, ¯ab, ¯a¯c, ¯ab¯c}. Therefore, code(¯ab¯c, ≻) = ¯a. It can be checked that ≻ is uniquely encoding.
The linear order ab ≻ ¯a¯b ≻ a¯b ≻ ¯ab of example 3 is not uniquely encoding: opt⁻¹(¯a¯b) = {¯a¯b, ¯b, ¯a}.
We already know that if a preference induces a conditionally
acyclic CO-net, then the opt / decoding function can be computed
with the FSopt algorithm. The main result of this section is that in
this case, another instance of the Forward Sweep procedure, called
FSenc, and depicted in Algorithm 4, can be used to compute the code
function. We illustrate it on an example.
Example 6. Consider again the linear order ≻ that corresponds to the LP-tree ψ0 of figure 1c, whose induced CO-net is depicted on figure 1e. Let o = ¯ab¯c, and suppose we want to compute code(¯ab¯c, ≻) with algorithm FSenc. At the first iteration of the "while" loop, the only variable that has no parent is A, with optimal value a ≠ o[A] = ¯a, thus code ← ¯a, and inst ← ¯a. At the next iteration, Pa(C|¯a) = ∅, with optimal value ¯c = o[C], so code is not updated, and inst ← ¯a¯c. At the last iteration, the optimal value for B given inst is b = o[B], thus the algorithm returns ¯a.
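A hedged Python sketch of FSenc over the same assumed CO-net representation as before (our own code, reproducing Example 6): only the values of o that deviate from the conditionally optimal value are written to the code:

def fs_enc(co_net, o):
    inst, code = {}, {}
    while len(inst) < len(co_net):
        progressed = False
        for X, rules in co_net.items():
            if X in inst:
                continue
            opts = {val for cond, val in rules
                    if all(inst.get(P, v) == v for P, v in cond.items())}
            if len(opts) == 1:                 # Paϕ(X | inst) = ∅
                if o[X] != next(iter(opts)):
                    code[X] = o[X]             # record only deviations
                inst[X] = o[X]
                progressed = True
        if not progressed:
            return "FAILURE"
    return code

co_net_phi0 = {
    "A": [({}, "a")],
    "B": [({"A": "a"}, "b"), ({"A": "¬a", "C": "¬c"}, "b"),
          ({"A": "¬a", "C": "c"}, "¬b")],
    "C": [({"A": "¬a"}, "¬c"), ({"A": "a", "B": "¬b"}, "¬c"),
          ({"A": "a", "B": "b"}, "c")],
}
# Example 6: code(¬ab¬c, ≻) = ¬a
assert fs_enc(co_net_phi0, {"A": "¬a", "B": "b", "C": "¬c"}) == {"A": "¬a"}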
Proposition 4. If the CO-net induced by a given preference relation ≻ is conditionally acyclic, then ≻ is uniquely encoding, and the coding function can be computed with the FSenc algorithm (Algorithm 4).
Proof. Let o be any alternative, and let u be the output of Algorithm 4 for o. Let us show that opt(u, ≻) = o. As shown earlier, opt(u) can be computed with Algorithm 3.
Table 1: Space savings for various compression algorithms on the three Renault datasets.

Dataset | LZMA   | PPMd   | bzip2  | DEFLATE | zstd   | LZ4    | zpaq   | brotli | separable CO-net | CO-net
Small   | 95.80% | 97.90% | 97.46% | 94.50%  | 96.22% | 93.51% | 94.46% | 96.42% | 92.19%           | 97.03%
Medium  | 96.04% | 97.98% | 97.71% | 94.82%  | 96.45% | 93.94% | 95.21% | 96.58% | 91.21%           | 97.12%
Big     | 96.40% | 97.93% | 97.64% | 94.90%  | 97.04% | 94.29% | 94.73% | 97.23% | 93.41%           | 97.67%
Remark that, in this decoding algorithm, the value of u[X] only affects the output if u[X] ≠ opt(X|inst): otherwise, no matter whether u is defined on X or not, X will take the same value. In this regard, Algorithm 4 simply computes the subset of the values of o that have an impact on the decoding, i.e., those such that o[X] ≠ opt(X|inst). Therefore, if we denote by u the output of Algorithm 4 for o, then opt(u, ≻) = o.
Now, let us show that u (defined on U) is the unique minimal instantiation such that opt(u) = o. Let v ≠ u (defined on V) be such that opt(v) = o. Consider the traces of Algorithm 3 for u and v. First, let us remark that there exists an order of attribute selection that is compatible with both the optimisation of u and that of v. Indeed, since opt(u) = opt(v), if we denote by inst_u (resp. inst_v) the content of the variable inst at any point of the execution of Algorithm 3 applied to u (resp. v), then inst_u ⊆ opt(u) and inst_v ⊆ opt(v). Therefore, inst_u and inst_v are always compatible. For this reason, the set of available attributes X, which only depends on Paϕ(X|inst), is the same for u and for v. Let us denote by L such an attribute selection order. Let us now prove that opt(u[V]) = opt(u) by comparing the executions of Algorithm 3 on u, v and u[V], denoted T_u, T_v and T_u[V], for this attribute selection order L. Let X_i be the variable chosen at iteration i. Since u and v lead to the same optimal alternative, the same value x_i is chosen at each iteration. Either u[X_i] = v[X_i], and therefore u[X_i] = u[V][X_i], so the same value x_i is chosen in T_u[V] and the executions still have the same values of inst. Or u[X_i] ≠ v[X_i], and, because u ∼ v, at least one of u or v is not defined on X_i. Let us assume (without loss of generality) that u is not defined on X_i. In that case, u[V] is not defined on X_i either, and T_u[V] chooses the same value for x_i as T_u. Therefore, the executions still have the same values of inst. By recurrence, since the values of inst are the same at each point of the executions, opt(u[V]) = opt(u). By assumption, u is minimal for cardinality. Therefore U ⊆ V (otherwise u[V] would be an instantiation smaller than u with opt(u[V]) = o, contradicting this minimality), so u ⊆ v and |u| < |v|, so u is the unique minimum for cardinality.
We are now able to compute the length of the compression of data D, given some CO-net H that is uniquely encoding:

L(D|H) = Σ_{o∈D} [ LN(|code(o, H)|) + log₂ C(|X|, |code(o, H)|) + Σ_{x∈code(o,H)} log₂(|X| − 1) ]    (2)

For each outcome o of the dataset, the terms encode, in order: the length of the minimal code of o, the set of variables that are assigned in this code, and the value for each attribute.
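A hedged Python sketch of equation (2) (our own, with hypothetical names), assuming each code is the dict returned by the fs_enc sketch above and dom_size maps attributes to domain sizes; note that the last term is zero for binary attributes, since knowing that an attribute deviates from the optimum determines its value:

from math import comb, log2

def l_universal(n, c0=2.865):
    total, term = log2(c0), log2(1 + n)
    while term > 0:
        total += term
        term = log2(term)
    return total

def data_length(codes, dom_size):
    """L(D | H): sum over the dataset of the length of each minimal code."""
    n_attrs = len(dom_size)
    length = 0.0
    for code in codes:
        length += (l_universal(len(code))            # size of the code
                   + log2(comb(n_attrs, len(code)))  # which attributes
                   + sum(log2(dom_size[X] - 1)       # which non-optimal value
                         for X in code))
    return length

print(data_length([{"A": "¬a"}, {}], {"A": 2, "B": 2, "C": 2}))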
Compression experiment While MDL is typically used for model selection and not compression, we propose to experimentally assess the relevance of CO-nets and separable CO-nets in the context of preference representation by comparing the length of the codes of several datasets with implementations of efficient compression algorithms configured for the highest compression ratio. Source code, data and CO-nets are available online³.
³ https://github.com/PFGimenez/co-net-ecai23
The datasets are real-world sales histories of cars from the Renault car manufacturer: "Small" has 48 variables and is 2.7 MB, "Medium" has 44 variables and is 1.4 MB, and "Big" has 87 variables and is 3.2 MB. These files are in the csv format, which is a text format. They can easily be compressed, as shown by the space savings in Table 1. The three algorithms with the highest space saving are PPMd, bzip2, and our encoding based on a CO-net. Separable CO-nets have the lowest space saving of all methods in this table, due to their very limited expressivity, but they still achieve more than 90% space saving. Table 1 does not include the snappy and LZ77 algorithms because of their poor compression efficiency: they obtained about 70% and 85% space saving, respectively.
While a direct comparison is unfair because MDL is a theoretical
tool that does not need to comply with the technical constraints of file
formats and compression speed, we can still conclude that CO-nets
can represent the regularities of real-world datasets with an efficiency
similar to the most efficient compression algorithms.
7 Conclusion
While ubiquitous in many industrial applications, the preferential optimisation query has been little studied on its own. By focusing on graphical preference models with efficient (polytime) preferential optimisation, CP-nets and LP-trees, we showed that these two popular models are, in fact, equally expressive for this query, and that conditionally acyclic CP-nets are exactly the CP-nets where Forward Sweep can be applied. Besides, we proposed an even more compact graphical model class, the CO-nets, that can be used for optimisation even though they contain little information about the actual linear order of preferences.
The method proposed by [9] to learn LP-trees can only be applied to models representing total orders, where the rank is defined. LP-trees are useful models for preference representation, but, as we demonstrated here, one does not need attribute importance when the optimisation query is the only query of interest. On the other hand, CP-nets are just as expressive and much more succinct. However, the learning approach of [9] is inapplicable to CP-nets because they do not represent total orders.
In Section 6, we detailed how Forward Sweep can be used for encoding and decoding alternatives for a given CO-net. Such procedures can be used within the MDL framework to learn CP-nets or CO-nets by minimising the MDL score. In this context, Prop. 1 is especially important, since it shows that conditionally acyclic CP-nets (resp. CO-nets) are exactly the CP-nets (resp. CO-nets) where the efficient Forward Sweep algorithm can be used for encoding and decoding. Polytime encoding and decoding is paramount for scalable MDL learning, generally based on local greedy search. This is a preliminary step towards unsupervised CP-net and CO-net learning with the minimum description length (MDL) principle, by adapting the polytime Forward Sweep algorithm to encoding and decoding.
Acknowledgements We thank the anonymous reviewers for their valuable comments. This work has benefited from the AI Interdisciplinary Institute ANITI. ANITI is funded by the French "Investing for the Future – PIA3" program under grant agreement ANR-19-PI3A-0004. This work has also been supported by the PING/ACK project of the French National Agency for Research, grant agreement ANR-18-CE40-0011.
References
[1] Thomas E. Allen, Cory Siler, and Judy Goldsmith, 'Learning tree-structured CP-nets with local search', in Proceedings of the Thirtieth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2017), eds., Vasile Rus and Zdravko Markov, pp. 8–13. AAAI Press, (2017).
[2] Richard Booth, Yann Chevaleyre, Jérôme Lang, Jérôme Mengin, and Chattrakul Sombattheera, 'Learning conditionally lexicographic preference relations', in Proceedings of the 19th European Conference on Artificial Intelligence (ECAI 2010), eds., Helder Coelho, Rudi Studer, and Michael Wooldridge, volume 215 of Frontiers in Artificial Intelligence and Applications, pp. 269–274. IOS Press, (2010).
[3] Craig Boutilier, ed. Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI'09), 2009.
[4] Craig Boutilier, Ronen I. Brafman, Carmel Domshlak, Holger H. Hoos, and David Poole, 'CP-nets: a tool for representing and reasoning with conditional ceteris paribus preference statements', Journal of Artificial Intelligence Research, 21, 135–191, (2004).
[5] Craig Boutilier, Ronen I. Brafman, Carmel Domshlak, Holger H. Hoos, and David Poole, 'Preference-based constrained optimization with CP-nets', Computational Intelligence, 20(2), 137–157, (2004).
[6] Michael Bräuning, Eyke Hüllermeier, Tobias Keller, and Martin Glaum, 'Lexicographic preferences for predictive modeling of human decision making: A new machine learning method with an application in accounting', European Journal of Operational Research, 258(1), 295–306, (2017).
[7] Michael Bräuning and Eyke Hüllermeier, 'Learning conditional lexicographic preference trees', in Preference Learning: Problems and Applications in AI. Proceedings of the ECAI 2012 workshop, eds., Johannes Fürnkranz and Eyke Hüllermeier, pp. 11–15, (2012).
[8] Hélène Fargier and Jérôme Mengin, 'A knowledge compilation map for conditional preference statements-based languages', in Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '21), eds., Frank Dignum, Alessio Lomuscio, Ulle Endriss, and Ann Nowé, pp. 492–500. ACM, (2021).
[9] Hélène Fargier, Pierre-François Gimenez, and Jérôme Mengin, 'Learning lexicographic preference trees from positive examples', in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018), eds., Sheila A. McIlraith and Kilian Q. Weinberger, pp. 2959–2966. AAAI Press, (2018).
[10] Peter C. Fishburn, 'Lexicographic orders, utilities and decision rules: A survey', Management Science, 20(11), 1442–1471, (1974).
[11] Niall M. Fraser, 'Applications of preference trees', in Proceedings of SMC'93, pp. 132–136, (1993).
[12] Yoav Freund, Raj D. Iyer, Robert E. Schapire, and Yoram Singer, 'An efficient boosting algorithm for combining preferences', Journal of Machine Learning Research, 4, 933–969, (2003).
[13] Minos Garofalakis, Aristides Gionis, Rajeev Rastogi, Sridhar Seshadri, and Kyuseok Shim, 'XTRACT: learning document type descriptors from XML document collections', Data Mining and Knowledge Discovery, 7, 23–56, (2003).
[14] Christophe Gonzales and Patrice Perny, 'GAI networks for utility elicitation', in Proceedings of KR'04, pp. 224–234, (2004).
[15] Peter Grünwald, 'Model selection based on minimum description length', Journal of Mathematical Psychology, 44(1), 133–152, (2000).
[16] Joshua T. Guerin, Thomas E. Allen, and Judy Goldsmith, 'Learning CP-net preferences online from user queries', in Proceedings of the Third International Conference on Algorithmic Decision Theory (ADT 2013), eds., Patrice Perny, Marc Pirlot, and Alexis Tsoukiàs, volume 8176 of Lecture Notes in Computer Science, pp. 208–220. Springer, (2013).
[17] Cynthia Huffman and Barbara E. Kahn, 'Variety for sale: mass customization or mass confusion?', Journal of Retailing, 74(4), 491–513, (1998).
[18] Frédéric Koriche and Bruno Zanuttini, 'Learning conditional preference networks', Artificial Intelligence, 174(11), 685–703, (2010).
[19] Fabien Labernia, Bruno Zanuttini, Brice Mayag, Florian Yger, and Jamal Atif, 'Online learning of acyclic conditional preference networks from noisy data', in IEEE International Conference on Data Mining (ICDM 2017), eds., Vijay Raghavan, Srinivas Aluru, George Karypis, Lucio Miele, and Xindong Wu, pp. 247–256. IEEE Computer Society, (2017).
[20] Wai Lam and Fahiem Bacchus, 'Learning Bayesian belief networks: An approach based on the MDL principle', Computational Intelligence, 10(3), 269–293, (1994).
[21] Jérôme Lang and Jérôme Mengin, 'The complexity of learning separable ceteris paribus preferences', in Boutilier [3], pp. 848–853.
[22] Xudong Liu and Miroslaw Truszczynski, 'Learning partial lexicographic preference trees over combinatorial domains', in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI 2015), eds., Blai Bonet and Sven Koenig, pp. 1539–1545. AAAI Press, (2015).
[23] Osman A. Mian, Alexander Marx, and Jilles Vreeken, 'Discovering fully oriented causal networks', in Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8975–8982, (2021).
[24] Jorma Rissanen, 'A universal prior for integers and estimation by minimum description length', The Annals of Statistics, 11(2), 416–431, (1983).
[25] Joe Suzuki, 'A construction of Bayesian networks from databases based on an MDL principle', in Uncertainty in Artificial Intelligence, pp. 266–273. Elsevier, (1993).
[26] Bianca Wackersreuther, Peter Wackersreuther, Annahita Oswald, Christian Böhm, and Karsten M. Borgwardt, 'Frequent subgraph discovery in dynamic networks', in Proceedings of the Eighth Workshop on Mining and Learning with Graphs, pp. 155–162, (2010).
[27] Nic Wilson, 'Consistency and constrained optimisation for conditional preferences', in Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), eds., Ramón López de Mántaras and Lorenza Saitta, pp. 888–892. IOS Press, (2004).
[28] Nic Wilson, 'Computational techniques for a simple theory of conditional preferences', Artificial Intelligence, 175, 1053–1091, (2011).
[29] Linda L. Zhang, 'Product configuration: a review of the state-of-the-art and future research', International Journal of Production Research, 52(21), 6381–6398, (2014).