Article

Deep Transfer as Structure Learning in Markov Logic Networks

01/2010

ABSTRACT Learning the relational structure of a domain is a fundamental problem in statistical relational learning. The deep transfer algorithm of Davis and Domingos attempts to improve structure learning in Markov logic networks by harnessing the power of transfer learning, using the second-order structural regularities of a source domain to bias the structure search process in a target domain. We propose that the clique-scoring process which discovers these second-order regularities constitutes a novel standalone method for learning the structure of Markov logic networks, and that this fact, rather than the transfer of structural knowledge across domains, accounts for much of the performance benefit observed via the deep transfer process. This claim is supported by experiments in which we find that clique scoring within a single domain often produces results equaling or surpassing the performance of deep transfer incorporating external knowledge, and also by explicit algorithmic similarities between deep transfer and other structure learning techniques.
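
Since the argument turns on the clique-scoring process, a minimal single-domain sketch may help fix ideas. One plausible reading of the deep-transfer scoring idea is: score a second-order clique by how far the empirical joint distribution over its groundings departs from its best factorization into subclique marginals. Everything below (function names, the boolean-tuple data encoding, the min-KL score) is an illustrative assumption, not Davis and Domingos' implementation.

    from collections import Counter
    import math

    def clique_score(groundings, decompositions):
        """Score a clique by the minimum KL divergence between its
        empirical joint distribution and the product of subclique
        marginals, over all supplied decompositions; higher scores mean
        the clique captures structure that no decomposition explains."""
        n = len(groundings)                  # groundings: list of bool tuples
        joint = Counter(groundings)

        def marginal(part):
            c = Counter(tuple(g[i] for i in part) for g in groundings)
            return {state: cnt / n for state, cnt in c.items()}

        best = float("inf")
        for parts in decompositions:         # parts: index tuples partitioning the clique
            margs = [marginal(p) for p in parts]
            kl = 0.0
            for state, cnt in joint.items():
                p = cnt / n
                q = 1.0
                for part, m in zip(parts, margs):
                    q *= m.get(tuple(state[i] for i in part), 1e-9)
                kl += p * math.log(p / q)
            best = min(best, kl)
        return best

    # usage: perfectly correlated literal pairs score ~log 2; independent pairs ~0
    data = [(True, True), (False, False)] * 50
    print(clique_score(data, [[(0,), (1,)]]))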

    ABSTRACT: We present a new approach to learning hypertext classifiers that combines a statistical text-learning method with a relational rule learner. This approach is well suited to learning in hypertext domains because its statistical component allows it to characterize text in terms of word frequencies, whereas its relational component is able to describe how neighboring documents are related to each other by hyperlinks that connect them. We evaluate our approach by applying it to tasks that involve learning definitions for (i) classes of pages, (ii) particular relations that exist between pairs of pages, and (iii) locating a particular class of information in the internal structure of pages. Our experiments demonstrate that this new approach is able to learn more accurate classifiers than either of its constituent methods alone.
    Machine Learning 43:97-119, 2001.
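
    A toy sketch of the statistical/relational division of labor described above, assuming pages are given as word lists and the classes of linking pages are known or predicted; the Naive Bayes text model and the additive link bonus below are illustrative stand-ins for the paper's actual statistical and relational rule-learning components.

        from collections import Counter
        import math

        def train_nb(docs, labels):
            """Word-frequency component: per-class word counts and priors."""
            counts = {y: Counter() for y in set(labels)}
            priors = Counter(labels)
            for words, y in zip(docs, labels):
                counts[y].update(words)
            vocab = {w for c in counts.values() for w in c}
            return counts, priors, vocab

        def classify(words, in_neighbor_labels, model, link_weight=1.0):
            """Combine the text score with a crude relational signal from
            the classes of pages that link to this one."""
            counts, priors, vocab = model
            def score(y):
                total = sum(counts[y].values()) + len(vocab)   # Laplace smoothing
                s = math.log(priors[y] / sum(priors.values()))
                for w in words:
                    if w in vocab:
                        s += math.log((counts[y][w] + 1) / total)
                # relational component: reward agreement with linking pages
                s += link_weight * sum(1 for ny in in_neighbor_labels if ny == y)
                return s
            return max(priors, key=score)

        # usage sketch:
        # model = train_nb([["homework", "lecture"], ["publications"]], ["course", "faculty"])
        # classify(["homework"], in_neighbor_labels=["course"], model=model)
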
    ABSTRACT: Many real-world applications of AI require both probability and first-order logic to deal with uncertainty and structural complexity. Logical AI has focused mainly on handling complexity, and statistical AI on handling uncertainty. Markov Logic Networks (MLNs) are a powerful representation that combines Markov Networks (MNs) and first-order logic by attaching weights to first-order formulas and viewing these as templates for features of MNs. State-of-the-art structure learning algorithms for MLNs maximize the likelihood of a relational database by performing a greedy search in the space of candidates. This can lead to suboptimal results because these approaches are unable to escape local optima. Moreover, due to the combinatorially explosive space of potential candidates, these methods are computationally prohibitive. We propose a novel algorithm for learning MLN structure, based on the Iterated Local Search (ILS) metaheuristic, that explores the space of structures through a biased sampling of the set of local optima. The algorithm focuses the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for the optimization engine. We show through experiments in two real-world domains that the proposed approach improves accuracy and learning time over the existing state-of-the-art algorithms.

    Markov logic has first-order logic and probabilistic graphical models as special cases. It extends first-order logic by attaching weights to formulas, providing the full expressiveness of graphical models and first-order logic in finite domains while remaining well defined in many infinite domains (22, 25). Weighted formulas are viewed as templates for constructing MNs, and in the infinite-weight limit Markov logic reduces to standard first-order logic. Markov logic avoids the i.i.d. (independent and identically distributed) assumption made by most statistical learners by using the power of first-order logic to compactly represent dependencies among objects and relations. Learning an MLN consists of structure learning (learning the logical clauses) and weight learning (setting the weight of each clause). In (22) structure learning was performed through ILP methods (13), followed by a weight learning phase in which maximum pseudo-likelihood (2) weights were learned for each learned clause. State-of-the-art algorithms for structure learning are those in (11, 16), where learning of MLNs is performed in a single step using weighted pseudo-likelihood as the evaluation measure during structure search. However, these algorithms follow systematic search strategies that can lead to local optima and prohibitive learning times. The algorithm in (11) performs a beam search in a greedy fashion, which makes it very susceptible to local optima, while the algorithm in (16) works in a bottom-up fashion, trying to consider fewer candidates for evaluation. Even though it considers fewer candidates, after initially scoring all candidates this algorithm attempts to add them one by one to the MLN, thus changing the MLN at almost every step, which greatly slows down the computation of the optimal weights. Moreover, neither of these algorithms can benefit from parallel architectures. We propose an approach based on the Iterated Local Search (ILS) metaheuristic that samples the set of local optima and performs a search in the sampled space. We show that, through a simple parallelism model such as independent multiple walks, ILS achieves improvements over the state-of-the-art algorithms.
    ECAI 2008 - 18th European Conference on Artificial Intelligence, Patras, Greece, July 21-25, 2008.
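
    The search loop described above follows the standard ILS skeleton: hill-climb to a local optimum, perturb it, hill-climb again, and walk between local optima under an acceptance bias. A minimal sketch, with `score` (e.g. weighted pseudo-likelihood), `neighbors` (clause edits such as literal additions and deletions) and `perturb` assumed as external operators rather than taken from the paper:

        import random

        def iterated_local_search(initial, score, neighbors, perturb,
                                  max_iters=100, seed=0):
            """Generic ILS: restrict the search to the subspace of local
            optima by alternating greedy local search and perturbation."""
            rng = random.Random(seed)

            def local_search(s):
                current, current_score = s, score(s)
                improved = True
                while improved:
                    improved = False
                    for cand in neighbors(current):
                        cand_score = score(cand)
                        if cand_score > current_score:
                            current, current_score = cand, cand_score
                            improved = True
                            break            # first-improvement strategy
                return current, current_score

            current, current_score = local_search(initial)
            best, best_score = current, current_score
            for _ in range(max_iters):
                cand, cand_score = local_search(perturb(current, rng))
                if cand_score > current_score:   # accept only improving optima
                    current, current_score = cand, cand_score
                if cand_score > best_score:
                    best, best_score = cand, cand_score
            return best, best_score

    The independent-multiple-walks parallelism mentioned in the abstract then amounts to running this loop with different seeds in parallel and keeping the best result.
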
    ABSTRACT: Transfer learning addresses the problem of how to leverage knowledge acquired in a source domain to improve the accuracy and speed of learning in a related target domain. This paper considers transfer learning with Markov logic networks (MLNs), a powerful formalism for learning in relational domains. We present a complete MLN transfer system that first autonomously maps the predicates in the source MLN to the target domain and then revises the mapped structure to further improve its accuracy. Our results in several real-world domains demonstrate that our approach successfully reduces the amount of time and training data needed to learn an accurate model of a target domain over learning from scratch.
    Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, Vancouver, British Columbia, Canada, July 22-26, 2007.
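
    The mapping step can be pictured as a search over arity-consistent predicate substitutions scored on target data. The brute-force sketch below is a hedged illustration only (the actual system constructs mappings far more selectively before revising the structure); `score_clauses` is an assumed stand-in for a measure such as weighted pseudo-likelihood, and the clause encoding is invented for the example.

        from itertools import permutations

        def best_predicate_mapping(source_preds, target_preds, clauses, score_clauses):
            """Enumerate arity-consistent source->target predicate mappings
            and keep the one whose translated clauses score best on target
            data. Predicates are dicts name -> arity; clause literals are
            (pred, args, sign) triples."""
            best, best_score = None, float("-inf")
            sources = list(source_preds)
            for perm in permutations(target_preds, len(sources)):
                mapping = dict(zip(sources, perm))
                if any(source_preds[s] != target_preds[t]
                       for s, t in mapping.items()):
                    continue                     # arities must match
                translated = [[(mapping[p], args, sign) for p, args, sign in clause]
                              for clause in clauses]
                cand_score = score_clauses(translated)
                if cand_score > best_score:
                    best, best_score = mapping, cand_score
            return best, best_score

    The revision phase that follows in the paper would then edit the translated clauses against target data; that step is omitted here.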
