Computing Posterior Probabilities of Structural Features in Bayesian Networks

Source: arXiv


We study the problem of learning Bayesian network structures from data.
Koivisto and Sood (2004) and Koivisto (2006) presented algorithms that can
compute the exact marginal posterior probability of a subnetwork, e.g., a
single edge, in O(n 2^n) time and the posterior probabilities for all n(n-1)
potential edges in O(n 2^n) total time, assuming that the number of parents
per node (the indegree) is bounded by a constant. One main drawback of their
algorithms is the requirement of a special structure prior that is non-uniform
and does not respect Markov equivalence. In this paper, we develop an algorithm
that can compute the exact posterior probability of a subnetwork in O(3^n) time
and the posterior probabilities for all n(n-1) potential edges in O(n 3^n)
total time. Our algorithm also assumes a bounded indegree but allows general
structure priors. We demonstrate the applicability of the algorithm on several
data sets with up to 20 variables.
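The O(3^n) bound is characteristic of dynamic programs that, for every subset S of the n variables, also iterate over every subset of S. The following sketch (not the authors' implementation, just an illustration of where the 3^n comes from) enumerates all such subset pairs with the standard submask trick:

```python
# Illustrative sketch only: a DP over variable subsets that visits every
# pair (S, T) with T a subset of S touches exactly 3^n pairs, since each
# element is independently in T, in S\T, or outside S.

def count_subset_pairs(n: int) -> int:
    """Count pairs (S, T) with T a subset of S, over an n-element ground set."""
    count = 0
    for s in range(1 << n):          # every subset S, encoded as a bitmask
        sub = s
        while True:                  # enumerate all submasks T of S
            count += 1               # here a real DP would combine dp[T] into dp[S]
            if sub == 0:
                break
            sub = (sub - 1) & s      # next smaller submask of s
    return count

assert count_subset_pairs(10) == 3 ** 10
```

A real posterior-computation DP would replace the counter with score accumulation over parent sets, but the loop structure, and hence the O(3^n) running time, is the same.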

Available from: Ru He, Mar 26, 2015
  • Source
    • "(7) and (8). To get an idea of how close the estimate is to the true posteriors, we used the algorithm in [16] to compute the exact P(D)."
    ABSTRACT: We study the problem of learning Bayesian network structures from data. We develop an algorithm for finding the k-best Bayesian network structures. We propose to compute the posterior probabilities of hypotheses of interest by Bayesian model averaging over the k-best Bayesian networks. We present empirical results on structural discovery over several real and synthetic data sets and show that the method outperforms the model selection method and the state-of-the-art MCMC methods.
  • Source
    • "Likewise, the posterior probability of an arbitrary fixed arc set can be computed by analogous dynamic programming techniques in time and space within a polynomial factor of 2^n (Koivisto and Sood, 2004; Koivisto, 2006). Interestingly, these results rely on the assumption that the prior distribution on DAGs obeys a special structure, which deviates, for example, from the simplest uniform distribution; if adhering to the uniform distribution, the fastest known algorithm is substantially slower, taking time O(3^n) and space O(2^n) (Tian and He, 2009). The space requirement is again the bottleneck in the former algorithms, but not in the latter."
    ABSTRACT: The fastest known exact algorithms for score-based structure discovery in Bayesian networks on n nodes run in time and space 2^n n^O(1). The usage of these algorithms is limited to networks on at most around 25 nodes, mainly due to the space requirement. Here, we study space-time tradeoffs for finding an optimal network structure. When little space is available, we apply the Gurevich-Shelah recurrence, originally proposed for the Hamiltonian path problem, and obtain
    UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, Montreal, QC, Canada, June 18-21, 2009; 01/2009
  • Source
    UAI 2010, Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, July 8-11, 2010; 01/2010