Conference Paper

Tractable reasoning in probabilistic OWL profiles


Abstract

Through the advance of information extraction and data mining, a number of knowledge bases (KBs) have been created, for instance, NELL and Google's Knowledge Vault. In line with this, probabilistic extensions of various description logics have been proposed for reasoning in probabilistic KBs. However, most of these languages are not tractable, impeding their practical use. Since present-day KBs can be very large, tractable reasoning is essential. In this work, we propose probabilistic extensions of OWL 2 RL and OWL 2 EL by using probabilistic soft logic, for which inference is known to be tractable. We show that inference in these probabilistic extensions of OWL 2 RL and OWL 2 EL remains tractable. We present experimental results over a YAGO KB that contains hundreds of schema axioms and thousands of instances.
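To make the idea concrete, here is a minimal sketch (with a hypothetical axiom, atoms, and weight, not taken from the paper) of how an OWL 2 RL subclass axiom such as Professor ⊑ Person can be encoded as a weighted soft rule in the style of probabilistic soft logic: atoms take truth values in [0, 1], and a grounded rule is scored by its distance to satisfaction under the Łukasiewicz relaxation.

```python
# Sketch: grounding one soft rule "Professor(x) -> Person(x)" under the
# Lukasiewicz relaxation used by probabilistic soft logic (PSL).
# Atom truth values live in [0, 1] instead of {0, 1}.

def lukasiewicz_implication(body: float, head: float) -> float:
    """Truth value of body -> head: min(1, 1 - body + head)."""
    return min(1.0, 1.0 - body + head)

def distance_to_satisfaction(body: float, head: float) -> float:
    """How far the grounded rule is from being fully satisfied."""
    return max(0.0, body - head)

# Hypothetical grounded atoms with soft truth values.
professor_alice = 0.9   # extraction confidence that Alice is a Professor
person_alice = 0.6      # current belief that Alice is a Person

weight = 2.0            # hypothetical rule weight
penalty = weight * distance_to_satisfaction(professor_alice, person_alice)
print(round(penalty, 2))  # 2.0 * max(0, 0.9 - 0.6) = 0.6
```

Because every such penalty term is a hinge function of the atom values, the weighted sum over all grounded rules stays convex, which is what makes inference tractable.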


Article
Full-text available
Whereas people learn many different types of knowledge from diverse experiences over many years, and become better learners over time, most current machine learning systems are much more narrow, learning just a single function or data model based on statistical analysis of a single data set. We suggest that people learn better than computers precisely because of this difference, and we suggest a key direction for machine learning research is to develop software architectures that enable intelligent agents to also learn many types of knowledge, continuously over many years, and to become better learners over time. In this paper we define more precisely this never-ending learning paradigm for machine learning, and we present one case study: the Never-Ending Language Learner (NELL), which achieves a number of the desired properties of a never-ending learner. NELL has been learning to read the Web 24 hours/day since January 2010, and so far has acquired a knowledge base with 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time. NELL has also learned to reason over its knowledge base to infer new beliefs it has not yet read from those it has, and NELL is inventing new relational predicates to extend the ontology it uses to represent beliefs. We describe the design of NELL, experimental results illustrating its behavior, and discuss both its successes and shortcomings as a case study in never-ending learning. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
Article
Full-text available
We propose a family of probabilistic description logics (DLs) that are derived in a principled way from Halpern's probabilistic first-order logic. The resulting probabilistic DLs have a two-dimensional semantics similar to temporal DLs and are well-suited for representing subjective probabilities. We carry out a detailed study of reasoning in the new family of logics, concentrating on probabilistic extensions of the DLs ALC and EL, and showing that the complexity ranges from PTime via ExpTime and 2ExpTime to undecidable.
Conference Paper
Full-text available
Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from just a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
Article
Full-text available
Populating a database with information from unstructured sources—also known as knowledge base construction (KBC)—is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based, respectively, on sampling and variational techniques. We also study the trade-off space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.
Article
Full-text available
Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. A recent name used to characterize this problem is knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.
Article
Full-text available
Tractable subsets of first-order logic are a central topic in AI research. Several of these formalisms have been used as the basis for first-order probabilistic languages. However, these are intractable, losing the original motivation. Here we propose the first non-trivially tractable first-order probabilistic language. It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity. We call it TML (Tractable Markov Logic). We show that TML knowledge bases allow for efficient inference even when the corresponding graphical models have very high treewidth. We also show how probabilistic inheritance, default reasoning, and other inference patterns can be carried out in TML. TML opens up the prospect of efficient large-scale first-order probabilistic inference.
Article
Full-text available
The Semantic Web effort has steadily been gaining traction in recent years. In particular, Web search companies are realizing that their products need to evolve towards having richer semantic search capabilities. Description logics (DLs) have been adopted as the formal underpinnings for Semantic Web languages used in describing ontologies. Reasoning under uncertainty has recently taken a leading role in this arena, given the nature of data found on the Web. In this paper, we present a probabilistic extension of the DL EL++ (which underlies the OWL 2 EL profile) using Markov logic networks (MLNs) as probabilistic semantics. This extension is tightly coupled, meaning that probabilistic annotations in formulas can refer to objects in the ontology. We show that, even though the tightly coupled nature of our language means that many basic operations are data-intractable, we can leverage a sublanguage of MLNs that allows us to rank the atomic consequences of an ontology relative to their probability values (called ranking queries) even when these values are not fully computed. We present an anytime algorithm to answer ranking queries, and provide an upper bound on the error that it incurs, as well as a criterion to decide when results are guaranteed to be correct.
Conference Paper
Full-text available
DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
Conference Paper
Full-text available
We propose a new family of probabilistic description logics (DLs) that, in contrast to most existing approaches, are derived in a principled way from Halpern’s probabilistic first-order logic. The resulting probabilistic DLs have a two-dimensional semantics similar to certain popular combinations of DLs with temporal logic and are well-suited for capturing subjective probabilities. Our main contribution is a detailed study of the complexity of reasoning in the new family of probabilistic DLs, showing that it ranges from PTIME for weak variants based on the lightweight DL EL to undecidable for some expressive variants based on the DL ALC.
Conference Paper
Full-text available
Log-linear description logics are probabilistic logics combining several concepts and methods from the areas of knowledge representation and reasoning and statistical relational AI. We describe some of the implementation details of the log-linear reasoner ELOG. The reasoner employs database technology to dynamically transform inference problems to integer linear programs (ILP). In order to lower the size of the ILPs and reduce the complexity we employ a form of cutting plane inference during reasoning.
Conference Paper
Full-text available
We present YAGO2, an extension of the YAGO knowledge base with focus on temporal and spatial knowledge. It is automatically built from Wikipedia, GeoNames, and WordNet, and contains nearly 10 million entities and events, as well as 80 million facts representing general world knowledge. An enhanced data representation introduces time and location as first-class citizens. The wealth of spatio-temporal information in YAGO can be explored either graphically or through a special time- and space-aware query language.
Conference Paper
Full-text available
The DL-Lite family of tractable description logics lies between the semantic web languages RDFS and OWL Lite. In this paper, we present a probabilistic generalization of the DL-Lite description logics, which is based on Bayesian networks. As an important feature, the new probabilistic description logics allow for flexibly combining terminological and assertional pieces of probabilistic knowledge. We show that the new probabilistic description logics are rich enough to properly extend both the DL-Lite description logics as well as Bayesian networks. We also show that satisfiability checking and query processing in the new probabilistic description logics is reducible to satisfiability checking and query processing in the DL-Lite family. Furthermore, we show that satisfiability checking and answering unions of conjunctive queries in the new logics can be done in LogSpace in the data complexity. For this reason, the new probabilistic description logics are very promising formalisms for data-intensive applications in the Semantic Web involving probabilistic uncertainty.
Conference Paper
Full-text available
Recently, it has been shown that the small description logic (DL) EL, which allows for conjunction and existential restrictions, has better algorithmic properties than its counterpart FL0, which allows for conjunction and value restrictions. Whereas the subsumption problem in FL0 already becomes intractable in the presence of acyclic TBoxes, it remains tractable in EL even with general concept inclusion axioms (GCIs). On the one hand, we extend the positive result for EL by identifying a set of expressive means that can be added to EL without sacrificing tractability. On the other hand, we show that basically all other additions of typical DL constructors to EL with GCIs make subsumption intractable, and in most cases even ExpTime-complete. In addition, we show that subsumption in FL0 with GCIs is ExpTime-complete.
Conference Paper
Full-text available
Log-linear description logics are a family of probabilistic logics integrating various concepts and methods from the areas of knowledge representation and reasoning and statistical relational AI. We define the syntax and semantics of log-linear description logics, describe a convenient representation as sets of first-order formulas, and discuss computational and algorithmic aspects of probabilistic queries in the language. The paper concludes with an experimental evaluation of an implementation of a log-linear DL reasoner.
Article
Full-text available
The past few years have seen a surge of interest in the field of probabilistic logic learning and statistical relational learning. In this endeavor, many probabilistic logics have been developed. ProbLog is a recent probabilistic extension of Prolog motivated by the mining of large biological networks. In ProbLog, facts can be labeled with probabilities. These facts are treated as mutually independent random variables that indicate whether these facts belong to a randomly sampled program. Different kinds of queries can be posed to ProbLog programs. We introduce algorithms that allow the efficient execution of these queries, discuss their implementation on top of the YAP-Prolog system, and evaluate their performance in the context of large networks of biological entities.
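The ProbLog semantics summarized above can be sketched in a few lines of Python: each probabilistic fact is an independent random variable, and the probability of a query is the total probability mass of the possible worlds (sampled programs) in which the query holds. The facts and probabilities below are hypothetical, and the exhaustive world enumeration is only illustrative; ProbLog itself uses much more efficient inference.

```python
from itertools import product

# Hypothetical probabilistic facts: edge(u, v) with independent probabilities.
facts = {("a", "b"): 0.8, ("b", "c"): 0.5, ("a", "c"): 0.1}

def reachable(edges, src, dst):
    """Simple reachability over a set of directed edges."""
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(v for (u, v) in edges if u == node)
    return False

def query_probability(facts, src, dst):
    """Sum the probability of every world in which dst is reachable from src."""
    keys = list(facts)
    total = 0.0
    for world in product([True, False], repeat=len(keys)):
        p = 1.0
        edges = set()
        for key, present in zip(keys, world):
            p *= facts[key] if present else 1.0 - facts[key]
            if present:
                edges.add(key)
        if reachable(edges, src, dst):
            total += p
    return total

# P(path(a, c)) = 0.1 + 0.8 * 0.5 - 0.1 * 0.8 * 0.5 = 0.46
print(round(query_probability(facts, "a", "c"), 3))
```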
Article
Because many artificial intelligence applications require the ability to reason with uncertain knowledge, it is important to seek appropriate generalizations of logic for that case. We present here a semantical generalization of logic in which the truth values of sentences are probability values (between 0 and 1). Our generalization applies to any logical system for which the consistency of a finite set of sentences can be established. The method described in the present paper combines logic with probability theory in such a way that probabilistic logical entailment reduces to ordinary logical entailment when the probabilities of all sentences are either 0 or 1.
Article
We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as “The probability that a randomly chosen bird flies is greater than 0.9.” The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief such as “The probability that Tweety (a particular bird) flies is greater than 0.9.” We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about probability.
Conference Paper
Probabilistic OWL (PR-OWL) improves the Web Ontology Language (OWL) with the ability to treat uncertainty using Multi-Entity Bayesian Networks (MEBN). PR-OWL 2 presents a better integration with OWL and its underlying logic, allowing the creation of ontologies with probabilistic and deterministic parts. However, PR-OWL 2 has scalability problems, since it is built upon OWL 2 DL, a version of OWL based on the description logic SROIQ(D) that has high complexity. To address this issue, this paper proposes PR-OWL 2 RL, a scalable version of PR-OWL based on the OWL 2 RL profile and triplestores (databases based on RDF triples). OWL 2 RL allows reasoning in polynomial time for the main reasoning tasks. This paper also presents the first-order expressions accepted by this new language and analyzes its expressive power. A comparison with the previous language shows which kinds of problems are more suitable for each version of PR-OWL.
Article
This paper introduces hinge-loss Markov random fields (HL-MRFs), a new class of probabilistic graphical models particularly well-suited to large-scale structured prediction and learning. We derive HL-MRFs by unifying and then generalizing three different approaches to scalable inference in structured models: (1) randomized algorithms for MAX SAT, (2) local consistency relaxation for Markov random fields, and (3) reasoning about continuous information with fuzzy logic. To make HL-MRFs easy to construct and use, we next present probabilistic soft logic (PSL), a new probabilistic programming language for defining HL-MRFs for relational data. We then introduce a convex optimization algorithm based on message passing for exact MAP inference in HL-MRFs, as well as algorithms for weight learning.
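A minimal sketch (with hypothetical rules, weights, and evidence) of why MAP inference in an HL-MRF is a convex problem: each weighted rule contributes a hinge-loss term max(0, body - head), which is convex in the atom values, so the total energy can be minimized with any convex solver. Here a coarse grid search over one free variable suffices to illustrate the idea.

```python
# Energy of a tiny HL-MRF with one free atom y = Person(alice) in [0, 1].
# Two hypothetical rules push in opposite directions:
#   w1: Professor(alice) -> Person(alice)    (evidence Professor(alice) = 0.9)
#   w2: Robot(alice) -> !Person(alice)       (evidence Robot(alice) = 0.3)

def energy(y: float) -> float:
    w1, w2 = 2.0, 1.0
    hinge1 = max(0.0, 0.9 - y)         # distance to satisfaction of rule 1
    hinge2 = max(0.0, 0.3 - (1 - y))   # distance to satisfaction of rule 2
    return w1 * hinge1 + w2 * hinge2

# Coarse grid search over [0, 1]; a real PSL system instead runs a
# message-passing convex solver (ADMM) over all grounded rules at once.
best = min((i / 100 for i in range(101)), key=energy)
print(best)  # the stronger rule 1 wins, pulling y up to 0.9
```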
Conference Paper
We introduce the probabilistic description logic ℬℰℒ. In ℬℰℒ, axioms are required to hold only in an associated context. The probabilistic component of the logic is given by a Bayesian network that describes the joint probability distribution of the contexts. We study the main reasoning problems in this logic; in particular, we (i) prove that deciding positive and almost-sure entailments is not harder for ℬℰℒ than for the BN, and (ii) show how to compute the probability, and the most likely context for a consequence.
Article
Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods.
Article
This chapter gives an extended introduction to the lightweight pro-files OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher per-formance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advan-tages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.
Article
EL is a simple tractable Description Logic that features conjunctions and existential restrictions. Due to its favorable computational properties and relevance to existing ontologies, EL has become the language of choice for terminological reasoning in biomedical applications, and has formed the basis of the OWL EL profile of the Web ontology language OWL. This paper describes ELK—a high performance reasoner for OWL EL ontologies—and details various aspects from theory to implementation that make ELK one of the most competitive reasoning systems for EL ontologies available today.
Conference Paper
We propose a framework for querying probabilistic instance data in the presence of an OWL2 QL ontology, arguing that the interplay of probabilities and ontologies is fruitful in many applications such as managing data that was extracted from the web. The prime inference problem is computing answer probabilities, and it can be implemented using standard probabilistic database systems. We establish a PTime vs. #P dichotomy for the data complexity of this problem by lifting a corresponding result from probabilistic databases. We also demonstrate that query rewriting (backwards chaining) is an important tool for our framework, show that non-existence of a rewriting into first-order logic implies #P-hardness, and briefly discuss approximation of answer probabilities.
Article
The recently introduced Datalog+/- family of ontology languages is especially useful for representing and reasoning over lightweight ontologies, and is set to play a central role in the context of query answering and information extraction for the Semantic Web. Recently, it has become apparent that it is necessary to develop a principled way to handle uncertainty in this domain. In addition to uncertainty as an inherent aspect of the Web, one must also deal with forms of uncertainty due to inconsistency and incompleteness, uncertainty resulting from automatically processing Web data, as well as uncertainty stemming from the integration of multiple heterogeneous data sources. In this paper, we take an important step in this direction by developing a probabilistic extension of Datalog+/-. This extension uses Markov logic networks as the underlying probabilistic semantics. Here, we focus especially on scalable algorithms for answering threshold queries, which correspond to the question “what is the set of all ground atoms that are inferred from a given probabilistic ontology with a probability of at least p?”. These queries are especially relevant to Web information extraction, since uncertain rules lead to uncertain facts, and only information with a certain minimum confidence is desired. We present several algorithms, namely a basic approach, an anytime one, and one based on heuristics, which is guaranteed to return sound results. Furthermore, we also study inconsistency in probabilistic Datalog+/- ontologies. We propose two approaches for computing preferred repairs based on two different notions of distance between repairs, namely symmetric and score-based distance. We also study the complexity of the decision problems corresponding to computing such repairs, which turn out to be polynomial and NP-complete in the data complexity, respectively.
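A threshold query of the kind described above is easy to state concretely; the sketch below (with hypothetical atoms and probabilities, not taken from the paper) simply keeps every inferred ground atom whose probability reaches the cutoff p. The hard part, which the paper's algorithms address, is computing those probabilities at scale in the first place.

```python
# Hypothetical inferred ground atoms with their computed probabilities.
inferred = {
    "bornIn(einstein, ulm)": 0.95,
    "citizenOf(einstein, germany)": 0.62,
    "advisorOf(einstein, bohr)": 0.08,
}

def threshold_query(atoms: dict, p: float) -> set:
    """Return all ground atoms inferred with probability at least p."""
    return {atom for atom, prob in atoms.items() if prob >= p}

print(sorted(threshold_query(inferred, 0.5)))
```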
Article
We extend the description logic EL++ with reflexive roles and range restrictions, and show that subsumption remains tractable if a certain syntactic restriction is adopted. We also show that subsumption becomes PSpace-hard (resp. undecidable) if this restriction is weakened (resp. dropped). Additionally, we prove that tractability is lost when symmetric roles are added: in this case, subsumption becomes ExpTime-hard.
Article
The work in this paper is directed towards sophisticated formalisms for reasoning under probabilistic uncertainty in ontologies in the Semantic Web. Ontologies play a central role in the development of the Semantic Web, since they provide a precise definition of shared terms in web resources. They are expressed in the standardized web ontology language OWL, which consists of the three increasingly expressive sublanguages OWL Lite, OWL DL, and OWL Full. The sublanguages OWL Lite and OWL DL have a formal semantics and a reasoning support through a mapping to the expressive description logics SHIF(D) and SHOIN(D), respectively. In this paper, we present the expressive probabilistic description logics P-SHIF(D) and P-SHOIN(D), which are probabilistic extensions of these description logics. They allow for expressing rich terminological probabilistic knowledge about concepts and roles as well as assertional probabilistic knowledge about instances of concepts and roles. They are semantically based on the notion of probabilistic lexicographic entailment from probabilistic default reasoning, which naturally interprets this terminological and assertional probabilistic knowledge as knowledge about random and concrete instances, respectively. As an important additional feature, they also allow for expressing terminological default knowledge, which is semantically interpreted as in Lehmann's lexicographic entailment in default reasoning from conditional knowledge bases. Another important feature of this extension of SHIF(D) and SHOIN(D) by probabilistic uncertainty is that it can be applied to other classical description logics as well. We then present sound and complete algorithms for the main reasoning problems in the new probabilistic description logics, which are based on reductions to reasoning in their classical counterparts, and to solving linear optimization problems. 
In particular, this shows the important result that reasoning in the new probabilistic description logics is decidable/computable. Furthermore, we also analyze the computational complexity of the main reasoning problems in the new probabilistic description logics in the general as well as restricted cases.
Conference Paper
We describe a system that supports arbitrarily complex SQL queries on probabilistic databases. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is efficient query evaluation, a problem that has not received attention in the past. We describe an optimization algorithm that can efficiently compute most queries. We show, however, that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods. For these queries we describe both an approximation algorithm and a Monte-Carlo simulation algorithm.
Article
We describe a framework for supporting arbitrarily complex SQL queries with "uncertain" predicates. The query semantics is based on a probabilistic model and the results are ranked, much like in Information Retrieval. Our main focus is query evaluation. We describe an optimization algorithm that can compute efficiently most queries. We show, however, that the data complexity of some queries is #P-complete, which implies that these queries do not admit any efficient evaluation methods. For these queries we describe both an approximation algorithm and a Monte-Carlo simulation algorithm.
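The Monte-Carlo fallback for #P-hard queries mentioned above can be sketched as follows (with a hypothetical probabilistic table): sample possible worlds by keeping each tuple independently with its probability, and estimate the query probability as the fraction of sampled worlds in which the query holds.

```python
import random

# Hypothetical probabilistic relation: each tuple exists independently.
likes = {("alice", "coffee"): 0.9, ("bob", "coffee"): 0.4}

def sample_world(rng):
    """Draw one possible world: keep each tuple with its probability."""
    return {t for t, p in likes.items() if rng.random() < p}

def estimate(query, n=100_000, seed=7):
    """Monte-Carlo estimate of P(query holds in a random world)."""
    rng = random.Random(seed)
    hits = sum(query(sample_world(rng)) for _ in range(n))
    return hits / n

# Boolean query: does anyone like coffee?
prob = estimate(lambda world: any(item == "coffee" for _, item in world))
# Exact answer: 1 - (1 - 0.9) * (1 - 0.4) = 0.94; the estimate should be close.
print(abs(prob - 0.94) < 0.01)
```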
References

Melisachew Wudage Chekol, Jakob Huber, Christian Meilicke, and Heiner Stuckenschmidt. 2016. Markov Logic Networks with Numerical Constraints. In ECAI 2016. 1017-1025.

Yevgeny Kazakov, Markus Krötzsch, and František Simančík. 2013. The Incredible ELK: From Polynomial Procedures to Efficient Reasoning with EL Ontologies. Journal of Automated Reasoning 53, 1 (2013), 1-61.

Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, Edwin Lewis-Kelham, Gerard de Melo, and Gerhard Weikum. YAGO2: Exploring and Querying World Knowledge in Time, Space, Context, and Many Languages.

Laécio L. dos Santos, Rommel N. Carvalho, Marcelo Ladeira, Weigang Li, and Gilson Libório Mendes. 2015. PR-OWL 2 RL - A Language for Scalable Uncertainty Reasoning on the Semantic Web Information. In Proceedings of the 11th International Workshop on Uncertainty Reasoning for the Semantic Web (URSW 2015), co-located with the 14th International Semantic Web Conference (ISWC 2015), Bethlehem, USA, October 12, 2015. 14-25.

Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, and Riccardo Zese. Reasoning with Probabilistic Ontologies.

Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. DBpedia: A Nucleus for a Web of Open Data. In Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference.

Ismail Ilkan Ceylan and Rafael Penaloza. 2014. Bayesian Description Logics. In DL 2014. CEUR Workshop Proceedings, Vol. 1193. 447-458.

Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2015. Hinge-Loss Markov Random Fields and Probabilistic Soft Logic. arXiv:1505.04406 [cs.LG] (2015).

Tom M. Mitchell, William W. Cohen, Estevam R. Hruschka Jr., Partha Pratim Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, et al. 2015. Never Ending Learning.