Journal on Satisfiability, Boolean Modeling and Computation 4 (2008) 191-217
Solving Weighted Max-SAT Problems in a Reduced Search
Space: A Performance Analysis∗
Department of Computer Science,
University of California, Los Angeles
We analyze, in this work, the performance of a recently introduced weighted Max-SAT
solver, Clone, in the Max-SAT evaluation 2007. Clone utilizes a novel bound computation
based on formula compilation that allows it to search in a reduced search space. We study
how additional techniques from the SAT and Max-SAT literature affect the performance of
Clone on problems from the evaluation. We then perform further investigations on factors
that may affect the performance of leading Max-SAT solvers. We empirically identify two
properties of weighted Max-SAT problems that can be used to adjust the difficulty level of
the problems with respect to the considered solvers.
Keywords: Max-SAT, constraint relaxation, lower bound computation
Submitted October 2007; revised February 2008; published June 2008
1. Introduction and Background
The maximum satisfiability problem (Max-SAT) is one of the optimization counterparts of
the Boolean satisfiability problem (SAT). In Max-SAT, given a Boolean formula in con-
junctive normal form (CNF), we want to determine the maximum number of clauses that
can be satisfied by any complete assignment, where a clause is a disjunction of literals
and a literal is simply a variable or its negation. Recently, the study of Max-SAT has
been growing in popularity, as demonstrated by the quickly increasing number of Max-SAT
solvers [20, 29, 24, 39, 1, 5]. The Max-SAT problem has also been used as a model for many
applications in areas such as databases, FPGA routing, and automatic scheduling.
The annual Max-SAT evaluation has played an important role in advancing this
field of study [2, 4].
Two important variations of the Max-SAT problem are the weighted Max-SAT and the
partial Max-SAT problems. The weighted Max-SAT problem is the Max-SAT problem, in
which each clause is assigned a positive weight. The objective of this problem is to maximize
the sum of weights of satisfied clauses by any assignment. The partial Max-SAT problem
is the Max-SAT problem, in which some clauses cannot be left falsified by any solution. In
∗ This work extends our previous work in 
© 2008 Delft University of Technology and the authors.
K. Pipatsrisawat et al.
practice, a clause that cannot be falsified is represented by a clause with a sufficiently large
weight. The combination of both variations is called the weighted partial Max-SAT problem.
For the rest of this paper, we use the term Max-SAT to refer to any variation of the Max-SAT problem.
There are two main approaches used by contemporary exact Max-SAT solvers: the
satisfiability-based approach and the branch-and-bound approach. The former converts
each Max-SAT problem with different hypothesized maximum weights into multiple SAT
problems and uses a SAT solver to solve these SAT problems to determine the actual solu-
tion. Examples of this type of solver are ChaffBS, ChaffLS  and SAT4J-MaxSAT .
The second approach, which seems to dominate in terms of performance based on recent
Max-SAT evaluations [2, 4], utilizes a depth-first branch-and-bound search in the space of
possible assignments. An evaluation function which computes a bound is applied at each
search node to determine any pruning opportunity.
The methods used to compute bounds vary among branch-and-bound solvers and often
give rise to differences in performance. Toolbar utilizes local consistencies to aid bound
computations [24, 15]. Lazy, MaxSatz, PMS, LB-SAT, and MiniMaxSAT compute bounds
using some variations of unit propagation and disjoint component detection [1, 27, 26, 5,
29, 20]. Moreover, solvers such as MiniMaxSAT and PMS also use Max-SAT inference rules
to improve bound quality.1
Our solver, Clone, uses a completely different approach for computing bounds. Clone
compiles a relaxed version of the Max-SAT problem into a tractable form and computes
bounds from the compiled formula. This approach allows our solver to better take advantage
of problem structure. Moreover, it can be thought of as an approach that combines search
and compilation together. Note that the Max-SAT solver Sr(w) by Ramírez and Geffner
uses the same approach for bound computation. Both solvers were developed independently
and both participated in the Max-SAT evaluation 2007 .
In this work, we experimented with additional techniques for improving our Max-SAT
solver and evaluated their impact on problems from the Max-SAT evaluation. We analyzed
the performance of our solver and performed further experiments that revealed a class of
problems on which our solver significantly outperformed other Max-SAT solvers. This result
demonstrates the benefits of our approach. Our investigation also led us to identify some
properties of Max-SAT problems that can be used to indicate their difficulty. We report
empirical results that show how these properties can be manipulated for different difficulty
levels of Max-SAT problems.
In the next section, we discuss the preprocessor of Clone and our approach for computing
bounds. In Section 3, we describe the search component and the inference techniques used
in Clone. Evaluation of our solver on problems from the Max-SAT evaluation is presented in
Section 4. In Section 5, we carefully analyze the performance of our solver and discuss some
properties of problems used in the evaluation. In Section 6, we present a series of results
from our additional investigations that allow us to identify some properties of weighted
Max-SAT problems that are good indicators of solvers’ performance. Finally, we conclude
with some remarks in Section 7.
1. Local consistency and Max-SAT inference are two highly-related concepts (see ).
2. Bound Computation
In the literature, the Max-SAT problem is often viewed as the problem of minimizing the
costs (or weights) of falsified clauses of any assignment. We will follow this interpretation
and use the term cost to refer to the sum of weights of clauses that are not satisfied. More-
over, we will use UB (upper bound) to denote the best cost of any complete assignment found
so far and LB (lower bound) to denote the lower bound on the cost of any assignment that
extends the current partial assignment. A branch-and-bound search algorithm can prune all
children of a node whenever LB ≥ UB.
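To make the pruning rule concrete, the following sketch implements a naive depth-first branch-and-bound for weighted Max-SAT in Python. It is not Clone's algorithm: the lower bound here is merely the weight of clauses already falsified by the partial assignment, far weaker than the compilation-based bound described in this section, and the function names and clause encoding are ours.

```python
def solve_maxsat(clauses, weights, n_vars):
    """Minimize the total weight of falsified clauses by depth-first
    branch-and-bound.  `clauses` is a list of literal lists (a positive
    int denotes a variable, a negative int its negation); `weights[i]`
    is the weight of clauses[i]."""
    best = [sum(weights)]  # UB: no assignment can cost more than this

    def lower_bound(assign):
        # LB: weight of clauses every literal of which is already false.
        lb = 0
        for clause, w in zip(clauses, weights):
            if all(assign.get(abs(l)) == (l < 0) for l in clause):
                lb += w
        return lb

    def dfs(var, assign):
        lb = lower_bound(assign)
        if lb >= best[0]:          # prune: LB >= UB
            return
        if var > n_vars:           # complete assignment; lb is its exact cost
            best[0] = lb
            return
        for value in (True, False):
            assign[var] = value
            dfs(var + 1, assign)
            del assign[var]

    dfs(1, {})
    return best[0]
```

For instance, on the two unit clauses (x1) with weight 3 and (¬x1) with weight 5, one clause must be falsified and the solver returns the cheaper cost, 3.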
To compute lower bounds, we take advantage of a tractable language called deterministic
decomposable negation normal form (d-DNNF) [13, 12]. The key property of d-DNNF that
we utilize here is the fact that, for each conjunction in a d-DNNF formula, the conjuncts
share no variable. This property is called decomposability . Many useful queries can
be answered about sentences in d-DNNF in time linear in the size of these sentences. One
of these queries is (weighted) minimum cardinality, which is similar to Max-SAT, except
that weights are associated with variables instead of clauses. Our approach is indeed based
on reducing Max-SAT on the given CNF to minimum cardinality on a d-DNNF equivalent
of the CNF. If this compilation is successful, the Max-SAT problem is solved immediately
since minimum cardinality can be solved in time linear in the d-DNNF size. Unfortunately,
however, the compilation process is often difficult. Our solution to this problem is then
to compile a relaxation of the original CNF. The relaxed CNF is generated carefully to
make the compilation process feasible. The price we pay for this relaxation is that solving
minimum cardinality on the resulting d-DNNF (of the relaxed CNF) will give lower bounds
instead of exact solutions. Our approach is then to use these lower bounds for pruning in
our branch-and-bound search.
We show how a Max-SAT problem can be reduced to a minimum cardinality problem in
Section 2.1. We will then discuss problem relaxation in Section 2.2, followed by compilation
in Section 2.3. A method for computing bounds from the compiled formula is discussed in Section 2.4.
2.1 Reducing Max-SAT to Minimum Cardinality
Given a CNF formula and a cost for each literal of the formula, the weighted minimum
cardinality problem asks for a satisfying assignment that costs the least. This problem is
also known as the MinCostSAT problem  and the binate covering problem . The cost
of an assignment is the sum of the costs of all literals that it sets to true. To reduce a
Max-SAT problem into a minimum cardinality problem, we introduce a distinct selector
variable to each clause of the Max-SAT problem and assign the clause’s cost to the positive
literal of the selector variable . All other literals are assigned zero cost. For example,
the clause C = (a∨b∨c) becomes C′= (s∨a∨b∨c) after the selector variable s is added.
If C originally had cost w associated with it, then w is assigned to s and any assignment
that sets s = true will incur this cost. After this conversion, the formula will be trivially
satisfiable, because every clause contains a distinct selector variable. Nevertheless, finding a
satisfying assignment with the lowest cost is not easy. The minimum cardinality problem is
NP-hard for CNF formulas. However, it can be solved efficiently once we have the formula
in d-DNNF. Any solution to this problem can be converted back to a solution for the original
Max-SAT problem by ignoring assignments of the selector variables.
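The reduction is mechanical enough to sketch in a few lines of Python. The encoding below (positive integers for variables, selectors numbered after the original variables, a dict of literal costs) is our own illustration, not Clone's internal representation:

```python
def add_selectors(clauses, weights):
    """Reduce weighted Max-SAT to weighted minimum cardinality: add a
    fresh selector variable s_i to clause i and charge the clause's
    weight to the positive literal of s_i.  All other literals
    implicitly cost 0."""
    n_vars = max(abs(l) for c in clauses for l in c)
    new_clauses, lit_costs = [], {}
    for i, (clause, w) in enumerate(zip(clauses, weights)):
        s = n_vars + 1 + i                # distinct selector variable
        new_clauses.append([s] + clause)  # (s OR clause): trivially satisfiable
        lit_costs[s] = w                  # setting s = true incurs the clause's cost
    return new_clauses, lit_costs
```

Applied to the clause (a∨b∨c) with cost 4 (with a, b, c numbered 1, 2, 3), this yields the clause (s∨a∨b∨c) with s = 4 and the cost map {s: 4}.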
At this point, we are almost ready to compile the CNF formula. The only remaining
issue is the time complexity of the compilation, which is, in the worst case, exponential in
the treewidth of the constraint graph  of the CNF formula. The treewidth of a graph
is a graph-theoretic parameter, which measures the extent to which a graph resembles a tree (the
lower the treewidth, the more tree-like) . In most cases, straight compilation will be
impractical. As a result, we need to relax the formula to lower its treewidth.
2.2 Problem Relaxation by Variable Splitting
The approach we use to relax the problem is called variable splitting, which was inspired
by the work of Choi et al. in . In general, splitting a variable v involves introducing
new variables for all but one occurrence of v in the original CNF formula.2 For example,
splitting a in the CNF (a∨b)∧(¬a∨c)∧(a∨d)∧(b∨¬c)∧(c∨¬d) results in the formula
(a∨b)∧(¬a1∨c)∧(a2∨d)∧(b∨¬c)∧(c∨¬d). In this case, a is called the split variable. The
new variables (a1 and a2 in this case) are called the clones of the split variable. Figure 1
illustrates the constraint graph of the above CNF before and after the split.
Figure 1. (left) The constraint graph of (a∨b)∧(¬a∨c)∧(a∨d)∧(b∨¬c)∧(c∨¬d). (right)
The constraint graph after splitting a. The treewidth is reduced from 2 to 1.
After splitting, the resulting problem becomes a relaxation of the original problem,
because any assignment in the original problem has an assignment in the split problem
with the same cost. Such an assignment can be obtained by setting the value of every
clone according to its split variable. Therefore, the lowest cost of any split formula is a
lower bound of the lowest cost of the original formula. The strategy we use for selecting
split variables is the same as the one described in . Identifying variables to split is closely
related to the problem of finding a loop cutset (or cycle cutset) [37, 45], except that we do
not necessarily insist on splitting variables until the constraint graph becomes a tree.
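Full splitting of a single variable can be sketched as follows. This is a hypothetical helper under our own conventions (clauses as lists of signed integers, clones numbered from a caller-supplied counter), not Clone's implementation:

```python
def split_variable(clauses, var, next_var):
    """Full split of `var`: keep its first occurrence (in clause order)
    and replace each later occurrence with a fresh clone variable,
    preserving the sign of the literal.  Returns the relaxed clauses
    and the list of clone variables introduced."""
    new_clauses, clones, seen = [], [], False
    fresh = next_var
    for clause in clauses:
        new_clause = []
        for lit in clause:
            if abs(lit) == var and seen:
                clone = fresh            # introduce a clone for this occurrence
                fresh += 1
                clones.append(clone)
                new_clause.append(clone if lit > 0 else -clone)
            else:
                if abs(lit) == var:
                    seen = True          # first occurrence stays as `var`
                new_clause.append(lit)
        new_clauses.append(new_clause)
    return new_clauses, clones
```

On the paper's example, with a, b, c, d numbered 1–4, splitting a turns (a∨b)∧(¬a∨c)∧(a∨d)∧(b∨¬c)∧(c∨¬d) into (a∨b)∧(¬a1∨c)∧(a2∨d)∧(b∨¬c)∧(c∨¬d), with a1 and a2 as the two clones.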
2.3 CNF to d-DNNF Compilation
Once the problem has a sufficiently low treewidth, it can be practically compiled. The
process of compiling a CNF formula into a d-DNNF formula is performed by a program
called C2D [12, 10]. C2D takes a CNF formula as input and produces an equivalent formula
2. This type of splitting is called full splitting. While other degrees of splitting are possible, we restrict
our attention to this method.
in d-DNNF. The output formula is fed to the search engine and will be used for later bound computations.
Figure 2. The DAG of the d-DNNF formula (a∧((b∧c)∨(¬b∧¬c)))∨(¬a∧((b∧¬c)∨(¬b∧c))).
Each node in this graph is also labeled with the value used to compute the minimum cardinality of
the formula.
2.4 Computing Bounds from d-DNNF
Every d-DNNF formula can be represented as a rooted DAG. Each node in the DAG is
either a Boolean constant, a literal, or a logical operator (conjunction or disjunction). The
root of the DAG corresponds to the formula. For example, consider the DAG of a d-DNNF
formula (a ∧ ((b ∧ c) ∨ (¬b ∧ ¬c))) ∨ (¬a ∧ ((b ∧ ¬c) ∨ (¬b ∧ c))) in Figure 2. In this figure,
the cost of every positive literal is set to 1 and the cost of every negative literal is set to
2. The minimum cardinality of the formula is simply the value of the root node, which is
defined recursively as follows:
1. The value of a literal node is the value of the literal
2. The value of an AND node is the sum of the values of all its children
3. The value of an OR node is the minimum of the values of its children
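These three rules translate directly into a traversal that is linear in the size of the DAG. The sketch below is a hypothetical Python rendering; the node encoding (tagged tuples) and names are ours, not Clone's:

```python
def min_cardinality(root, lit_cost):
    """Evaluate the weighted minimum cardinality of a d-DNNF DAG.
    A node is ('lit', l), ('and', children), or ('or', children);
    `lit_cost` maps each literal to its cost.  Memoizing by node
    identity keeps the traversal linear in the DAG size."""
    cache = {}

    def value(node):
        key = id(node)
        if key in cache:
            return cache[key]
        kind = node[0]
        if kind == 'lit':
            v = lit_cost[node[1]]
        elif kind == 'and':
            # Sound because of decomposability: conjuncts share no variable.
            v = sum(value(c) for c in node[1])
        else:  # 'or': the best (cheapest) disjunct
            v = min(value(c) for c in node[1])
        cache[key] = v
        return v

    return value(root)
```

On the formula of Figure 2, with every positive literal costing 1 and every negative literal costing 2, the inner AND nodes evaluate to 2, 4, 3, and 3, the two OR nodes to 2 and 3, the two outer AND nodes to 3 and 5, and the root to 3, which is the minimum cardinality.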
Note that Step 2 is possible because the formula satisfies decomposability. If the formula
is a relaxed formula, then the computed minimum cardinality becomes a lower bound of
the minimum cardinality of the formula before relaxation (hence a lower bound of the
optimal cost of the original Max-SAT problem). The d-DNNF formula can also be efficiently
conditioned on any partial or complete assignment of its variables. Conditioning only affects
the values of the nodes whose literals are set to false. False literals can no longer contribute
to any solution of the problem. Hence, the values of such nodes are set to ∞, which may
in turn affect the values of their parents or ancestors. The resulting bound computed from