Conference Paper

A Dynamic Approach for MPE and Weighted MAX-SAT.

Conference: IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007
Source: DBLP

ABSTRACT The problem of Most Probable Explanation (MPE) arises in the scenario of probabilistic inference: finding an assignment to all variables that has the maximum likelihood given some evidence. We consider the more general CNF-based MPE problem, where each literal in a CNF formula is associated with a weight. We describe reductions between MPE and weighted MAX-SAT, and show that both can be solved by a variant of weighted model counting. The MPE-SAT algorithm is quite competitive with the state-of-the-art MAX-SAT, WCSP, and MPE solvers on a variety of problems.
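The CNF-based MPE problem the abstract defines can be illustrated with a toy brute-force solver (the instance, weights, and function names below are ours, not the paper's): among assignments satisfying every clause, pick the one maximizing the product of the weights of the literals it makes true.

```python
from itertools import product

def mpe_bruteforce(n_vars, clauses, lit_weight):
    """Exhaustive CNF-based MPE: among assignments satisfying every
    clause, return the one maximizing the product of the weights of
    the literals it sets true. Exponential; illustration only."""
    best, best_w = None, -1.0
    for bits in product([False, True], repeat=n_vars):
        def holds(lit):
            v = bits[abs(lit) - 1]          # variables are 1-based
            return v if lit > 0 else not v
        if all(any(holds(l) for l in c) for c in clauses):
            w = 1.0
            for v in range(1, n_vars + 1):
                w *= lit_weight[v] if bits[v - 1] else lit_weight[-v]
            if w > best_w:
                best, best_w = bits, w
    return best, best_w

# toy instance: (x1 OR x2) AND (NOT x1 OR x2)
clauses = [[1, 2], [-1, 2]]
weights = {1: 0.3, -1: 0.7, 2: 0.9, -2: 0.1}
assignment, w = mpe_bruteforce(2, clauses, weights)
```

Dropping the hard-clause filter and instead penalizing violated clauses in the objective gives the weighted MAX-SAT view the paper's reductions rely on.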

  • ABSTRACT: In this article we investigate the use of portfolios (or collections) of heuristics when solving computationally hard problems using stochastic local search. We consider uncertainty reasoning, specifically the computation of most probable explanations in Bayesian networks (BNs). Our contribution is two-fold. First, we introduce a portfolio-based stochastic local search approach that utilizes an initialization portfolio and a search portfolio. Second, and leveraging our portfolio approach, we investigate determinism in BNs. We introduce a novel additive measure of gain (or gradient), which is tailored to partly deterministic Bayesian networks, and show how it generalizes the MAXSAT measure of gain from stochastic local search for the satisfiability (SAT) problem. This measure of gain provides, along with an explanation's gain in probability and their respective noisy variants, different heuristics that locally improve an explanation. Our approach is implemented in a stochastic local search system, Stochastic Greedy Search. Stochastic Greedy Search is here compared to the state-of-the-art inference system Hugin, which performs BN inference by compilation to and propagation in clique trees. We report on experiments using partly deterministic Bayesian networks from applications as well as synthetically generated networks. On synthetic networks, Stochastic Greedy Search speeds up computation by approximately two orders of magnitude compared to Hugin. In application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based stochastic local search outperforms clique tree clustering and vice versa.
    While our focus here is not on adaptation or learning during search, we believe that our framework and results can in turn enable innovations in those areas, thereby further enhancing stochastic local search algorithms.
    Journal of Automated Reasoning 01/2011; 46:103-160.
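The classical MAXSAT measure of gain that the additive BN measure above generalizes is easy to state concretely: the change in the number of satisfied clauses caused by flipping one variable. A minimal sketch (the instance and function name are ours):

```python
def maxsat_gain(clauses, assignment, var):
    """Flip gain used in SAT local search: change in the number of
    satisfied clauses if `var` (1-based) is flipped in `assignment`
    (a list of booleans). Positive gain means the flip helps."""
    def sat(clause, bits):
        return any((bits[abs(l) - 1] if l > 0 else not bits[abs(l) - 1])
                   for l in clause)
    before = sum(sat(c, assignment) for c in clauses)
    flipped = list(assignment)
    flipped[var - 1] = not flipped[var - 1]
    after = sum(sat(c, flipped) for c in clauses)
    return after - before

clauses = [[1, 2], [-1, 2]]
gain = maxsat_gain(clauses, [False, False], 2)  # gain from flipping x2
```

A probabilistic analogue would replace the clause count with an explanation's (log-)probability, which is the direction the additive measure described above takes.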
  • ABSTRACT: The satisfiability problem (SAT), as one of the six basic core NP-complete problems, has been the object of many studies over the last two decades (Lardeux et al. 2005, 2006). GASAT (Lardeux et al. 2005, 2006; Hao et al. 2002) is one of the current state-of-the-art genetic algorithms for solving SAT, and the discrete Lagrange-multiplier method (DLM) (Wu and Wah 1999a, b) is one of the current state-of-the-art local search algorithms. GASAT is a hybrid of genetic and tabu search techniques; it uses tabu search to avoid restarting the search once it converges. In this paper, we improve GASAT by replacing the tabu search with the DLM algorithm. We show that the performance of the new algorithm, DGASAT, is far better than that of GASAT on most of the benchmark instances. We further improve DGASAT by introducing the notion of improving one of the best members of the current population at a time. We show through experimentation that DGASAT+ is far better than DGASAT on nearly all the benchmark instances.
    Artificial Intelligence Review 01/2010; 33:41-59.
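The DLM idea can be sketched in a few lines (our simplification, not the authors' implementation): minimize a penalty-weighted count of unsatisfied clauses, and when no single flip helps, raise the multipliers of the unsatisfied clauses instead of restarting.

```python
import random

def dlm_sketch(n_vars, clauses, max_flips=10000, seed=0):
    """DLM-flavored local search sketch: greedily flip the variable
    that most reduces the sum of multipliers lam[c] over unsatisfied
    clauses; at a local minimum, increase lam for unsatisfied clauses.
    Returns a satisfying assignment or None."""
    rng = random.Random(seed)
    bits = [rng.random() < 0.5 for _ in range(n_vars)]
    lam = [1.0] * len(clauses)

    def unsat(b):
        return [i for i, c in enumerate(clauses)
                if not any((b[abs(l) - 1] if l > 0 else not b[abs(l) - 1])
                           for l in c)]

    for _ in range(max_flips):
        u = unsat(bits)
        if not u:
            return bits                      # all clauses satisfied
        cost = sum(lam[i] for i in u)
        best_v, best_cost = None, cost
        for v in range(n_vars):              # try every single flip
            bits[v] = not bits[v]
            c = sum(lam[i] for i in unsat(bits))
            bits[v] = not bits[v]
            if c < best_cost:
                best_v, best_cost = v, c
        if best_v is None:
            for i in u:                      # local minimum: raise multipliers
                lam[i] += 1.0
        else:
            bits[best_v] = not bits[best_v]
    return None

sol = dlm_sketch(3, [[1, 2], [-1, 3], [-2, -3]])
```

The multiplier updates are what lets the search escape local minima without a restart, which is the role tabu search played in the original GASAT.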
  • ABSTRACT: The problem of learning discrete Bayesian networks from data is encoded as a weighted MAX-SAT problem and the MaxWalkSat local search algorithm is used to address it. For each dataset, the per-variable summands of the (BDeu) marginal likelihood for different choices of parents ('family scores') are computed prior to applying MaxWalkSat. Each permissible choice of parents for each variable is encoded as a distinct propositional atom and the associated family score encoded as a 'soft' weighted single-literal clause. Two approaches to enforcing acyclicity are considered: either by encoding the ancestor relation or by attaching a total order to each graph and encoding that. The latter approach gives better results. Learning experiments have been conducted on 21 synthetic datasets sampled from 7 BNs. The largest dataset has 10,000 datapoints and 60 variables producing (for the 'ancestor' encoding) a weighted CNF input file with 19,932 atoms and 269,367 clauses. For most datasets, MaxWalkSat quickly finds BNs with higher BDeu score than the 'true' BN. The effect of adding prior information is assessed. It is further shown that Bayesian model averaging can be effected by collecting BNs generated during the search.
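The soft-clause part of this encoding is straightforward to sketch (variable names and scores below are hypothetical; the hard clauses enforcing acyclicity and one-family-per-variable are omitted): each permissible parent set becomes a fresh atom, and its family score becomes the weight of a unit soft clause.

```python
def soft_family_clauses(family_scores):
    """family_scores: {(var, frozenset_of_parents): log_score}.
    Returns (atom index map, weighted single-literal soft clauses),
    mirroring the 'soft weighted single-literal clause' encoding
    described above."""
    atom_of = {}
    soft = []
    for family, score in family_scores.items():
        atom = atom_of.setdefault(family, len(atom_of) + 1)  # 1-based atoms
        soft.append((score, [atom]))                         # (weight, unit clause)
    return atom_of, soft

# hypothetical family scores for two variables A and B
scores = {("A", frozenset()): -10.2,
          ("A", frozenset({"B"})): -8.7,
          ("B", frozenset()): -9.1}
atoms, soft = soft_family_clauses(scores)
```

A MAX-SAT solver maximizing the total weight of satisfied soft clauses, subject to the hard constraints, then selects the highest-scoring acyclic structure.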

