
Computing Posterior Probabilities of Structural Features in Bayesian Networks

05/2012;
Source: arXiv

ABSTRACT We study the problem of learning Bayesian network structures from data.
Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can
compute the exact marginal posterior probability of a subnetwork, e.g., a
single edge, in O(n2^n) time and the posterior probabilities for all n(n-1)
potential edges in O(n2^n) total time, assuming that the number of parents per
node (the indegree) is bounded by a constant. One main drawback of their
algorithms is the requirement of a special structure prior that is non-uniform
and does not respect Markov equivalence. In this paper, we develop an algorithm
that can compute the exact posterior probability of a subnetwork in O(3^n) time
and the posterior probabilities for all n(n-1) potential edges in O(n3^n) total
time. Our algorithm also assumes a bounded indegree but allows general
structure priors. We demonstrate the applicability of the algorithm on several
data sets with up to 20 variables.
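As an illustration of the quantity the paper computes (not of its O(3^n) algorithm itself), the exact posterior probability of each edge can be obtained for very small n by brute-force enumeration of all DAGs, given any weight function proportional to P(D|G)P(G); the uniform weight used in the usage note below is a placeholder assumption:

```python
from itertools import product

def is_acyclic(edges, n):
    # Kahn's algorithm: repeatedly remove nodes with indegree zero.
    indeg = [0] * n
    for _, j in edges:
        indeg[j] += 1
    stack = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while stack:
        v = stack.pop()
        seen += 1
        for i, j in edges:
            if i == v:
                indeg[j] -= 1
                if indeg[j] == 0:
                    stack.append(j)
    return seen == n

def all_dags(n):
    # Enumerate every subset of the n(n-1) directed edges; keep the acyclic ones.
    cand = [(i, j) for i in range(n) for j in range(n) if i != j]
    for bits in product((0, 1), repeat=len(cand)):
        g = [e for e, b in zip(cand, bits) if b]
        if is_acyclic(g, n):
            yield g

def edge_posteriors(n, weight):
    # weight(g) is the unnormalized posterior mass P(D|G) * P(G) of a DAG g;
    # the posterior of an edge is the weighted fraction of DAGs containing it.
    total = 0.0
    mass = {}
    for g in all_dags(n):
        w = weight(g)
        total += w
        for e in g:
            mass[e] = mass.get(e, 0.0) + w
    return {e: m / total for e, m in mass.items()}
```

For example, there are 25 DAGs on n = 3 labeled nodes, and under a uniform weight each directed edge appears in 8 of them, so `edge_posteriors(3, lambda g: 1.0)` assigns every edge posterior 8/25. Real use would replace the uniform weight with a decomposable marginal-likelihood score times a structure prior.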

  • ABSTRACT: Constraint-based causal discovery algorithms use conditional independence tests to identify the skeleton and invariant orientations of a causal network. Two major disadvantages of constraint-based methods are that (a) they are sensitive to error propagation and (b) the results of the conditional independence tests are binarized by being compared to a hard threshold; thus, the resulting networks are not easily evaluated in terms of reliability. We present PROPeR, a method for estimating posterior probabilities of pairwise relations (adjacencies and non-adjacencies) of a network skeleton as a function of the corresponding p-values. This novel approach has no significant computational overhead and can scale up to the same number of variables as the constraint-based algorithm of choice. We also present BiND, an algorithm that identifies neighborhoods of high structural confidence on causal networks learnt with constraint-based algorithms. The algorithm uses PROPeR to estimate the confidence of all pairwise relations. Maximal neighborhoods of the skeleton with minimum confidence above a user-defined threshold are then identified using the Bron-Kerbosch algorithm for identifying maximal cliques. In our empirical evaluation, we demonstrate that (a) the posterior probability estimates for pairwise relations are reasonable and comparable with estimates obtained using more expensive Bayesian methods and (b) BiND identifies sub-networks with higher structural precision and recall than the output of the constraint-based algorithm.
    Seventh European Workshop on Probabilistic Graphical Models (PGM); 01/2014
  • ABSTRACT: We study the problem of learning Bayesian network structures from data. We develop an algorithm for finding the k-best Bayesian network structures. We propose to compute the posterior probabilities of hypotheses of interest by Bayesian model averaging over the k-best Bayesian networks. We present empirical results on structural discovery over several real and synthetic data sets and show that the method outperforms the model selection method and the state-of-the-art MCMC methods.
    03/2012;
  • ABSTRACT: We present a new sampling approach to Bayesian learning of the Bayesian network structure. Like some earlier sampling methods, we sample linear orders on nodes rather than directed acyclic graphs (DAGs). The key difference is that we replace the usual Markov chain Monte Carlo (MCMC) method by the method of annealed importance sampling (AIS). We show that AIS is not only competitive to MCMC in exploring the posterior, but also superior to MCMC in two ways: it enables easy and efficient parallelization, due to the independence of the samples, and lower-bounding of the marginal likelihood of the model with good probabilistic guarantees. We also provide a principled way to correct the bias due to order-based sampling, by implementing a fast algorithm for counting the linear extensions of a given partial order.
    Proceedings of the Twenty-Third international joint conference on Artificial Intelligence; 08/2013
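The model-averaging idea in the second citing paper above can be sketched in a few lines: given the k best structures and their log posterior scores, the posterior of a structural feature is its score-weighted frequency. The graph representation (frozensets of edges) and the `feature` predicate are illustrative assumptions, not that paper's API:

```python
import math

def feature_posterior(k_best, feature):
    # k_best: list of (graph, log_score) pairs for the k best structures,
    # where log_score is log P(D|G) + log P(G) up to a shared constant.
    # feature(graph) returns 1.0 if the feature (e.g. an edge) is present.
    m = max(s for _, s in k_best)                 # log-sum-exp for stability
    w = [math.exp(s - m) for _, s in k_best]
    z = sum(w)
    return sum(wi * feature(g) for (g, _), wi in zip(k_best, w)) / z
```

For instance, with two structures whose unnormalized posteriors are 3 and 1, and where only the first contains edge ("a", "b"):

```python
k_best = [(frozenset({("a", "b")}), math.log(3.0)), (frozenset(), 0.0)]
feature_posterior(k_best, lambda g: float(("a", "b") in g))  # 3/(3+1) = 0.75
```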
