Article

The Independent Choice Logic for modelling multiple agents under uncertainty

Authors:
David Poole

Abstract

Inspired by game theory representations, Bayesian networks, influence diagrams, structured Markov decision process models, logic programming, and work in dynamical systems, the independent choice logic (ICL) is a semantic framework that allows for independent choices (made by various agents, including nature) and a logic program that gives the consequence of choices. This representation can be used as a specification for agents that act in a world, make observations of that world and have memory, as well as a modelling tool for dynamic environments with uncertainty. The rules specify the consequences of an action, what can be sensed and the utility of outcomes. This paper presents a possible-worlds semantics for ICL, and shows how to embed influence diagrams, structured Markov decision processes, and both the strategic (normal) form and extensive (game-tree) form of games within the ICL. It is argued that the ICL provides a natural and concise representation for multi-agent decision-making under uncertainty that allows for the representation of structured probability tables, the dynamic construction of networks (through the use of logical variables) and a way to handle uncertainty and decisions in a logical representation.
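As a rough illustration of the possible-worlds semantics sketched in the abstract, the short Python fragment below enumerates the worlds induced by two independent choice alternatives and a tiny deterministic rule set, then sums the probabilities of the worlds entailing a query. The alternatives, rules and numbers are invented for the example and do not follow Poole's concrete ICL syntax.

```python
from itertools import product

# Toy sketch (not Poole's formal ICL syntax): two independent choice
# alternatives made by "nature", plus deterministic rules that give the
# consequences of each total choice.
alternatives = {
    "weather": {"rain": 0.3, "dry": 0.7},
    "sprinkler": {"on": 0.4, "off": 0.6},
}

def consequences(choice):
    """Acyclic 'logic program': derive atoms from a total choice."""
    atoms = set()
    if choice["weather"] == "rain" or choice["sprinkler"] == "on":
        atoms.add("wet_grass")
    return atoms

def prob(query_atom):
    """Sum the probabilities of the possible worlds entailing the query."""
    total = 0.0
    names = list(alternatives)
    for values in product(*(alternatives[n] for n in names)):
        choice = dict(zip(names, values))
        p = 1.0
        for n in names:
            p *= alternatives[n][choice[n]]
        if query_atom in consequences(choice):
            total += p
    return total

print(prob("wet_grass"))  # 0.3 + 0.7 * 0.4 = 0.58
```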


... The goal is to select the strategy (set or sequence of actions) that maximizes the utility function. While the field of decision theory has devoted a lot of effort to deal with various forms of knowledge and uncertainty, there are so far only a few approaches that are able to cope with both uncertainty and rich logical or relational representations (see (Poole 1997;Nath and Domingos 2009;Chen and Muggleton 2009)). This is surprising, given the popularity of such representations in the field of statistical relational learning (Getoor and Taskar 2007;De Raedt et al. 2008). ...
... While DTProbLog's representation and spirit are related to those of e.g. (Poole 1997) for ICL, (Chen and Muggleton 2009) for SLPs, and (Nath and Domingos 2009) for MLNs, its inference mechanism is distinct in that it employs state-of-the-art techniques using decision diagrams for computing the optimal strategy exactly; cf. the related work section for a more detailed comparison. ...
... Several AI-subfields are related to DTPROBLOG, either because they focus on the same problem setting or because they use compact data-structures for decision problems. Closely related is the independent choice logic (ICL) (Poole 1997), which shares its distribution semantics (Sato 1995) with ProbLog, and which can represent the same kind of decision problems as DTPROBLOG. Similar to DTPROBLOG being an extension of an existing language ProbLog, so have two related systems been extended towards utilities recently. ...
Article
We introduce DTProbLog, a decision-theoretic extension of Prolog and its probabilistic variant ProbLog. DTProbLog is a simple but expressive probabilistic programming language that allows the modeling of a wide variety of domains, such as viral marketing. In DTProbLog, the utility of a strategy (a particular choice of actions) is defined as the expected reward for its execution in the presence of probabilistic effects. The key contribution of this paper is the introduction of exact, as well as approximate, solvers to compute the optimal strategy for a DTProbLog program and the decision problem it represents, by making use of binary and algebraic decision diagrams. We also report on experimental results that show the effectiveness and the practical usefulness of the approach.
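The expected-utility notion in this abstract can be mimicked by brute-force enumeration. The sketch below scores every strategy of a two-person viral-marketing toy whose action costs, purchase probabilities and reward are all invented; DTProbLog's actual solvers use binary and algebraic decision diagrams rather than enumeration.

```python
from itertools import product

# Toy viral-marketing-style sketch: each decision fact is a boolean action
# ("market to person X"), each action has a cost, and a probabilistic
# effect ("X buys") yields a reward.  All names and numbers are invented.
actions = ["market_a", "market_b"]            # decision facts
cost = {"market_a": -2.0, "market_b": -2.0}   # utility of taking the action
buy_prob = {"market_a": 0.4, "market_b": 0.3} # P(buys | marketed)
reward = 10.0                                 # utility of a purchase

def expected_utility(strategy):
    """Expected utility = action costs + expected reward from purchases."""
    eu = 0.0
    for act in actions:
        if strategy[act]:
            eu += cost[act] + buy_prob[act] * reward
    return eu

best = max((dict(zip(actions, vals)) for vals in product([False, True], repeat=2)),
           key=expected_utility)
print(best, expected_utility(best))  # market both: (-2 + 4) + (-2 + 3) = 3.0
```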
... Hence, in this case, our distribution would be as follows: P(W1) = 0.5 * 0.6 = 0.3, P(W2) = 0.5 * 0.4 = 0.2, P(W3) = 0.5 * 0.6 = 0.3, and P(W4) = 0.5 * 0.4 = 0.2, with P(W1) + P(W2) + P(W3) + P(W4) = 1. The distribution semantics is likely the most popular approach in the context of probabilistic logic programming, where several languages have emerged, e.g., ProbLog [24], Logic Programs with Annotated Disjunctions [30], Independent Choice Logic [22], PRISM [27], etc. We therefore think that it is worth considering an equivalent approach in the field of rewriting too. ...
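The world probabilities quoted in this excerpt follow from multiplying two independent binary choices with probabilities 0.5/0.5 and 0.6/0.4. A quick check, with invented labels for the two choices:

```python
# Quick check of the world probabilities quoted above: two independent
# choices, one with probabilities 0.5/0.5 and one with 0.6/0.4 (the labels
# "h"/"t" and "yes"/"no" are invented placeholders).
first = {"h": 0.5, "t": 0.5}
second = {"yes": 0.6, "no": 0.4}

worlds = {(a, b): first[a] * second[b] for a in first for b in second}
print(worlds)                # 0.3, 0.2, 0.3, 0.2 as in the excerpt
print(sum(worlds.values()))  # 1.0 up to floating-point rounding
```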
... In this paper, we have presented a new approach to combine term rewriting and probabilities, resulting in a more expressive formalism that can model complex relationships and uncertainty. In particular, we have defined a distribution semantics inspired by the one introduced by Sato for logic programs [26], which is currently very popular and forms the basis of several probabilistic logic languages (e.g., ProbLog [24], Logic Programs with Annotated Disjunctions [30], Independent Choice Logic [22] or PRISM [27]). Thus, we think that a distribution semantics can also play a significant role in the context of term rewriting. ...
Preprint
Full-text available
Probabilistic programming is becoming increasingly popular thanks to its ability to specify problems with a certain degree of uncertainty. In this work, we focus on term rewriting, a well-known computational formalism. In particular, we consider systems that combine traditional rewriting rules with probabilities. Then, we define a distribution semantics for such systems that can be used to model the probability of reducing a term to some value. We also show how to compute a set of "explanations" for a given reduction, which can be used to compute its probability. Finally, we illustrate our approach with several examples and outline a couple of extensions that may prove useful to improve the expressive power of probabilistic rewrite systems.
... In recent years, researchers have sought to combine the benefits of logic with the power of POMDPs. However, there has been very little research on relational POMDPs, and work has mostly focused either on efficient algorithms for propositional or grounded representations (Boutilier and Poole 1996;Hansen and Feng 2000;Geffner and Bonet 1998;Shani et al. 2008), or on representation (and not algorithmic) issues in the relational case (Poole 1997;Bacchus, Halpern, and Levesque 1999). The first approach to handle both aspects (Wang and Schmolze 2005) developed a simple model for relational POMDPs and a compact representation for grounded belief states, and used heuristic search with an algorithm for updating the belief state. ...
... The general case can be handled in a similar way by capturing the conditional dependencies; we defer the details to the full version of the paper. An alternative approach (Poole 1997) associates each observation with something similar to a deterministic action alternative, and can be captured in our model as well. Each of the two approaches can be used to capture some situations more compactly than the other. ...
Article
Relational Markov Decision Processes (MDP) are a useful abstraction for stochastic planning problems since one can develop abstract solutions for them that are independent of domain size or instantiation. While there has been an increased interest in developing relational fully observable MDPs, there has been very little work on relational partially observable MDPs (POMDP), which deal with uncertainty in problem states in addition to stochastic action effects. This paper provides a concrete formalization of relational POMDPs making several technical contributions toward their solution. First, we show that to maintain correctness one must distinguish between quantification over states and quantification over belief states; this implies that solutions based on value iteration are inherently limited to the finite horizon case. Second, we provide a symbolic dynamic programming algorithm for finite horizon relational POMDPs, solving them at an abstract level, by lifting the propositional incremental pruning algorithm. Third, we show that this algorithm can be implemented using first order decision diagrams, a compact representation for functions over relational structures, that has been recently used to solve relational MDPs.
... The logic programming track extended the concepts of logical programming languages like Prolog, which have a first-order expressivity for models without uncertainty, by adding the ability to address probabilistic uncertainty. Efforts included Independent Choice Logic (Poole 1997), Bayesian Logic (Milch et al. 2007), and Church (Goodman et al. 2012). The second track, probabilistic graphical models, extended graphical models from a fixed-entity or limited variable entity count propositional logic level to a first-order level. ...
... In this paper, we visualize our patterns using BN fragments. This work can be readily applied to probabilistic logic programs, as any BN can be implemented using an equivalently expressive probabilistic logic language (Poole 2008). ...
Article
Full-text available
First-order expressive capabilities allow Bayesian networks (BNs) to model problem domains where the number of entities, their attributes, and their relationships can vary significantly between model instantiations. First-order BNs are well-suited for capturing knowledge representation dependencies, but literature on design patterns specific to first-order BNs is few and scattered. To identify useful patterns, we investigated the range of dependency models between combinations of random variables (RVs) that represent unary attributes, functional relationships, and binary predicate relationships. We found eight major patterns, grouped into three categories, that cover a significant number of first-order BN situations. Selection behavior occurs in six patterns, where a relationship/attribute identifies which entities in a second relationship/attribute are applicable. In other cases, certain kinds of embedded dependencies based on semantic meaning are exploited. A significant contribution of our patterns is that they describe various behaviors used to establish the RV’s local probability distribution. Taken together, the patterns form a modeling framework that provides significant insight into first-order expressive BNs and can reduce efforts in developing such models. To the best of our knowledge, there are no comprehensive published accounts of such patterns.
... There is a substantial amount of work on languages combining non-monotonic logical reasoning with reasoning about probability which is different from P-log. The original P-log paper [8] presents the comparison with some of this work, including L-PLP [29], PRISM [43], NS-PLP [33], SLP [32], PKB [34], BLP [25], LPAD [45], ICL [40], PHA [40] and others. Since these languages contain neither partial functions nor other features of P-log which are refined in our paper, this comparison is still valid. ...
Article
Full-text available
This paper focuses on the investigation and improvement of the knowledge representation language P-log, which allows for both logical and probabilistic reasoning. We refine the definition of the language by eliminating some ambiguities and incidental decisions made in its original version and slightly modify the formal semantics to better match the intuitive meaning of the language constructs. We also define a new class of coherent (i.e., logically and probabilistically consistent) P-log programs which facilitates their construction and proofs of correctness. A query answering algorithm, sound for programs from this class, and a prototype implementation are available; due to their size, they are not included in the paper. They can, however, be found in the dissertation of the first author.
... We have examined the complexity of Relational Bayesian Networks elsewhere [88]; some results and proofs, but not all of them, are similar to the ones presented here. There are also languages that encode repetitive Bayesian networks using functional programming [87,89,125,102] or logic programming [25,44,103,104,114,117]. We have examined the complexity of the latter formalisms elsewhere [27,28,29]; again, some results and proofs, but not all of them, are similar to the ones presented here. ...
Preprint
We examine the complexity of inference in Bayesian networks specified by logical languages. We consider representations that range from fragments of propositional logic to function-free first-order logic with equality; in doing so we cover a variety of plate models and of probabilistic relational models. We study the complexity of inferences when network, query and domain are the input (the inferential and the combined complexity), when the network is fixed and query and domain are the input (the query/data complexity), and when the network and query are fixed and the domain is the input (the domain complexity). We draw connections with probabilistic databases and liftability results, and obtain complexity classes that range from polynomial to exponential levels.
... The most common semantics for these programs is the distribution semantics [13], which assigns to each ground program a joint probability distribution over the atoms occurring in it. It is the basis for many programming languages such as the Independent Choice Logic [9], PRISM [14], Logic Programs with Annotated Disjunctions [15] and ProbLog [1]. ...
Preprint
Full-text available
Pearl and Verma developed d-separation as a widely used graphical criterion to reason about the conditional independencies that are implied by the causal structure of a Bayesian network. As acyclic ground probabilistic logic programs correspond to Bayesian networks on their dependency graph, we can compute conditional independencies from d-separation in the latter. In the present paper, we generalize the reasoning above to the non-ground case. First, we abstract the notion of a probabilistic logic program away from external databases and probabilities to obtain so-called program structures. We then present a correct meta-interpreter that decides whether a certain conditional independence statement is implied by a program structure on a given external database. Finally, we give a fragment of program structures for which we obtain a completeness statement of our conditional independence oracle. We close with an experimental evaluation of our approach revealing that our meta-interpreter performs significantly faster than checking the definition of independence using exact inference in ProbLog 2.
... The most common semantics for these programs is the distribution semantics [13], which assigns to each ground program a joint probability distribution over the atoms occurring in it. It is the basis for many programming languages such as the Independent Choice Logic [9], PRISM [14], Logic Programs with Annotated Disjunctions [15] and ProbLog [1]. ...
Article
Pearl and Verma developed d-separation as a widely used graphical criterion to reason about the conditional independencies that are implied by the causal structure of a Bayesian network. As acyclic ground probabilistic logic programs correspond to Bayesian networks on their dependency graph, we can compute conditional independencies from d-separation in the latter. In the present paper, we generalize the reasoning above to the non-ground case. First, we abstract the notion of a probabilistic logic program away from external databases and probabilities to obtain so-called program structures. We then present a meta-interpreter that decides whether a certain conditional independence statement is implied by a program structure on a given external database. Finally, we give a fragment of program structures for which we obtain a completeness statement of our conditional independence oracle. We close with an experimental evaluation of our approach revealing that our meta-interpreter performs significantly faster than checking the definition of independence using exact inference in ProbLog 2.
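The conditional independencies discussed here can be sanity-checked by brute force on a tiny network. The snippet below verifies, for a made-up chain A -> B -> C with invented CPT entries, that C is independent of A given B, which is exactly what a d-separation test would certify; it is not the meta-interpreter described in the paper.

```python
from itertools import product

# Minimal sanity check of one conditional independence that d-separation
# would also predict: in the chain A -> B -> C, C is independent of A
# given B.  The CPT numbers below are made up for illustration.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}
p_c_given_b = {True: {True: 0.6, False: 0.4}, False: {True: 0.5, False: 0.5}}

def joint(a, b, c):
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def cond(target, given):
    """P(target | given) by summing the joint over all full assignments."""
    num = den = 0.0
    for a, b, c in product([True, False], repeat=3):
        world = {"a": a, "b": b, "c": c}
        if all(world[k] == v for k, v in given.items()):
            den += joint(a, b, c)
            if all(world[k] == v for k, v in target.items()):
                num += joint(a, b, c)
    return num / den

# P(c | b) equals P(c | a, b) for every value of a: all three prints agree.
print(cond({"c": True}, {"b": True}))
print(cond({"c": True}, {"a": True, "b": True}))
print(cond({"c": True}, {"a": False, "b": True}))
```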
... This influence diagram (ID) type was created by developing a structured information retrieval application over Bayesian networks (de Campos et al., 2005). Extended IDs: extended IDs are diagrams in which the node and arc definitions of traditional IDs are specified lexically, in a way that leaves no room for ambiguity (Johnson, Lagerstörm, Narman and Marten, 2007). Fuzzy IDs: fuzzy IDs treat the uncertainties of real-life decision problems not only through probability distributions but also within a structure in which the nodes and arcs of the ID take fuzzy values instead of exact numerical values (Hui and Yan-ling, 2009; Liu, Bai and Yu, 2006; Lopez-Diaz and Rodriguez-Muniz, 2007; Peng, 2009; Zhi and Xue-jin, 2010). Game-theory-based IDs: unlike traditional IDs, these can analyse not only static factors but also dynamic variables that change over time. This technique is used for decision analysis especially in real-life problems with a high level of competition (Zhou, Lü and Liu, 2012). Multi-agent IDs: a multi-agent ID is the type of ID used when there is more than one decision maker and group decision making is carried out. Here, a decision-making team identifies and evaluates the factors affecting the decision (Koller and Milch, 2003; Nishiyama and Sawaragi, 2011; Poole, 1997; Shi and Lin, 2010; Yang, Si-feng, Zhi-geng and Ming-li, 2011; Zeng and Poh, 2009). Interactive dynamic IDs: a type of ID developed to evaluate sequential multi-agent decision problems, also taking the time constraint of multi-agent IDs into account (Doshi, Chandrasekaran and Zeng, 2010; Luo and Tian, 2011; Luo, Li and Zeng, 2011; Luo, Yin, Li and Wu, 2011; Zeng and Xiang, 2010; Zeng, Chen and Doshi, 2011; Zeng and Doshi, 2012). Possibilistic IDs: used to analyse local dependencies between state and decision variables. ...
Thesis
Abstract: Equipment selection is a decision-making problem of vital importance under today's economic circumstances. An unsuitable equipment selection negatively affects the productivity and overall performance of a production system, increases costs and damages the company's prestige. Equipment selection is a complicated and difficult process to manage, containing many conflicting criteria, possible alternatives and conceptions. In this study, the equipment selection problem of a manufacturing company operating in Ankara is handled and a new approach for solving this problem is developed. This approach is constituted with the aim of successfully overcoming the multi-criteria, fuzzy and conflict-laden structure of the selection process and benefiting from the reliability and accuracy that mathematical models provide. The study ensures that the effects of the linguistic terms used in the evaluation are reflected in the selection process truly and properly, and that the weights of the determined goals are taken into account with the help of a mathematical model. Key Words: Equipment selection, multi-criteria decision making, F-PROMETHEE method, 0-1 goal programming method
... Possible world semantics is common in probabilistic logic programming and relational probabilistic models (Renkens et al. 2012;Kwiatkowska, Norman, and Parker 2002;Poole 1997). OpenPDBs extend this semantics to a (finite) open universe, and allow imprecise probabilities (Levi 1980) for tuples in this universe. ...
Article
Probabilistic databases (PDBs) are usually incomplete, e.g., containing only the facts that have been extracted from the Web with high confidence. However, missing facts are often treated as being false, which leads to unintuitive results when querying PDBs. Recently, open-world probabilistic databases (OpenPDBs) were proposed to address this issue by allowing probabilities of unknown facts to take any value from a fixed probability interval. In this paper, we extend OpenPDBs by Datalog+/- ontologies, under which both upper and lower probabilities of queries become even more informative, enabling us to distinguish queries that were indistinguishable before. We show that the dichotomy between P and PP in (Open)PDBs can be lifted to the case of first-order rewritable positive programs (without negative constraints); and that the problem can become NP^PP-complete, once negative constraints are allowed. We also propose an approximating semantics that circumvents the increase in complexity caused by negative constraints.
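A minimal sketch of the interval reading described in this abstract, under the usual tuple-independence assumption and with invented relation names and numbers: facts absent from the database get a probability anywhere in [0, λ], so a monotone conjunctive query gets its lower bound by setting unknown facts to 0 and its upper bound by setting them to λ.

```python
# Rough sketch of the open-world interval idea: facts missing from the
# database are not assumed false, but get a probability anywhere in
# [0, lam].  For a monotone conjunctive query over independent tuples,
# the bounds follow from setting unknown facts to 0 (lower) and to lam
# (upper).  Relation names and numbers are invented.
known = {("advises", "alice", "bob"): 0.8}
lam = 0.3   # upper probability for facts not in the database

def prob_of_fact(fact, unknown_value):
    return known.get(fact, unknown_value)

def query_prob(facts, unknown_value):
    """Probability that all facts in an independent conjunction hold."""
    p = 1.0
    for f in facts:
        p *= prob_of_fact(f, unknown_value)
    return p

q = [("advises", "alice", "bob"), ("coauthor", "alice", "carol")]
print(query_prob(q, 0.0))   # lower bound: 0.0
print(query_prob(q, lam))   # upper bound: 0.8 * 0.3 = 0.24
```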
... The large majority of PILP techniques proposed so far fall into the learning from interpretations setting including parameter estimation of probabilistic logic programs [44], learning of probabilistic relational models [45], parameter estimation of relational Markov models [46], learning of object-oriented Bayesian networks [47], learning relational dependency networks [48], and learning logic programs with annotated disjunctions [49]. To define probabilities on proofs, ICL [50], Prism [51], and stochastic logic programs [52] attach probabilities to facts (respectively clauses) and treat them as stochastic choices within resolution. PILP techniques that learn from proofs have been developed including Hidden Markov model induction by Bayesian model merging [53], relational Markov models [54], and logical hidden Markov models [55]. ...
Preprint
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle in adoption. To instill confidence and trust in artificial intelligence systems, Explainable Artificial Intelligence has emerged as a response to improving modern machine learning algorithms' explainability. Inductive Logic Programming (ILP), a subfield of symbolic artificial intelligence, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing methods inspired by ILP need to be addressed for their successful application in practice. For example, existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noises and disturbances. This survey paper summarizes the recent advances in ILP and a discussion of statistical relational learning and neural-symbolic algorithms, which offer synergistic views to ILP. Following a critical review of the recent advances, we delineate observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory artificial intelligence systems.
... In order to learn (probabilistic) theories from probabilistic data, PILP systems need to rely on a specific language that can code probabilities. There are many probabilistic inference systems that can represent and manipulate probabilities, such as SLP [24], ICL [28], Prism [32], BLP [19], CLP(BN ) [30], MLN [29] and ProbLog [20]. Whilst there are a number of probabilistic logic languages in the literature, there are few works dedicated to performing structure learning over one of these languages and thus representing uncertain knowledge in a human-readable form. ...
Article
Full-text available
Probabilistic inductive logic programming (PILP) is a statistical relational learning technique which extends inductive logic programming by considering probabilistic data. The ability to use probabilities to represent uncertainty comes at the cost of an exponential evaluation time when composing theories to model the given problem. For this reason, PILP systems rely on various pruning strategies in order to reduce the search space. However, to the best of the authors’ knowledge, there has been no systematic analysis of the different pruning strategies, how they impact the search space and how they interact with one another. This work presents a unified representation for PILP pruning strategies which enables end-users to understand how these strategies work both individually and combined and to make an informed decision on which pruning strategies to select so as to best achieve their goals. The performance of pruning strategies is evaluated both time and quality-wise in two state-of-the-art PILP systems with datasets from three different domains. Besides analysing the performance of the pruning strategies, we also illustrate the utility of PILP in one of the application domains, which is a real-world application.
... First-order open-universe POMDPs follow the open-universe assumption for the first-order description of an environment using Bayesian logic as a basis (Srivastava et al. 2014). Another first-order representation is the language of independent choice logic that also allows for a set of agents (Poole 1997). To the best of our knowledge, we are the first to consider lifting the agent set, which finds its application in nanoscale systems where large groups of indistinguishable nanodevices act jointly in an environment. ...
Preprint
Full-text available
DNA-based nanonetworks have a wide range of promising use cases, especially in the field of medicine. With a large set of agents, a partially observable stochastic environment, and noisy observations, such nanoscale systems can be modelled as a decentralised, partially observable, Markov decision process (DecPOMDP). As the agent set is a dominating factor, this paper presents (i) lifted DecPOMDPs, partitioning the agent set into sets of indistinguishable agents, reducing the worst-case space required, and (ii) a nanoscale medical system as an application. Future work turns to solving and implementing lifted DecPOMDPs.
... Example 1 illustrates a program without function symbols. If a program has at least one variable, one constant, and one function symbol, its grounding is infinite, so the previous definition must be extended [15]. ...
Preprint
Full-text available
Hybrid probabilistic logic programs can represent several scenarios thanks to the expressivity of Logic Programming extended with facts representing discrete and continuous distributions. The semantics for this type of programs is crucial since it ensures that a probability can be assigned to every query. Here, following one recent semantics proposal, we illustrate a concrete syntax, and we analyse the syntactic requirements needed to preserve the well-definedness.
... Mathematically, it is formalized by going from real to complex-valued calculus as envisioned by Kauffman & Varela (1980); Schar & Cooper (2005). In logical terms (Poole 1997; Sloman & Hagmayer 2006), this corresponds to a generalization from classical-Boolean to quantum logic of decision making (Bruza & Cole 2005). ...
Preprint
Full-text available
The paper describes a model of subjective goal-oriented semantics extending standard "view-from-nowhere" approach. Generalization is achieved by using a spherical vector structure essentially supplementing the classical bit with circular dimension, organizing contexts according to their subjective causal ordering. This structure, known in quantum theory as qubit, is shown to be universal representation of contextual-situated meaning at the core of human cognition. Subjective semantic dimension, inferred from fundamental oscillation dynamics, is discretized to six process-stage prototypes expressed in common language. Predicted process-semantic map of natural language terms is confirmed by the open-source word2vec data.
... The concept of AI was heralded by Alan Turing in the 1950s, who proposed the Turing test to measure a form of natural language (symbolic) communication between humans and machines. In the 1960s, Lotfi Zadeh proposed fuzzy logic with dominant knowledge representation and mobile robots [1]. Stanford University created the Automated Mathematician to explore new mathematical theories based on a heuristic algorithm. ...
Article
Full-text available
Artificial intelligence (AI) is progressively changing techniques of teaching and learning. In the past, the objective was to provide an intelligent tutoring system without intervention from a human teacher to enhance skills, control, knowledge construction, and intellectual engagement. This paper proposes a definition of AI focusing on enhancing the humanoid agent Nao’s learning capabilities and interactions. The aim is to increase Nao intelligence using big data by activating multisensory perceptions such as visual and auditory stimuli modules and speech-related stimuli, as well as being in various movements. The method is to develop a toolkit by enabling Arabic speech recognition and implementing the Haar algorithm for robust image recognition to improve the capabilities of Nao during interactions with a child in a mixed reality system using big data. The experiment design and testing processes were conducted by implementing an AI principle design, namely, the three-constituent principle. Four experiments were conducted to boost Nao’s intelligence level using 100 children, different environments (class, lab, home, and mixed reality Leap Motion Controller (LMC). An objective function and an operational time cost function are developed to improve Nao’s learning experience in different environments accomplishing the best results in 4.2 seconds for each number recognition. The experiments’ results showed an increase in Nao’s intelligence from 3 to 7 years old compared with a child’s intelligence in learning simple mathematics with the best communication using a kappa ratio value of 90.8%, having a corpus that exceeded 390,000 segments, and scoring 93% of success rate when activating both auditory and vision modules for the agent Nao. The developed toolkit uses Arabic speech recognition and the Haar algorithm in a mixed reality system using big data enabling Nao to achieve a 94% success learning rate at a distance of 0.09 m; when using LMC in mixed reality, the hand sign gestures recorded the highest accuracy of 98.50% using Haar algorithm. The work shows that the current work enabled Nao to gradually achieve a higher learning success rate as the environment changes and multisensory perception increases. This paper also proposes a cutting-edge research work direction for fostering child-robots education in real time. 1. Introduction Artificial intelligence (AI) was introduced half a century ago. Researchers initially wanted to build an electronic brain equipped with a natural form of intelligence. The concept of AI was heralded by Alan Turing in the 1950s, who proposed the Turing test to measure a form of natural language (symbolic) communication between humans and machines. In the 1960s, Lutfi Zadah proposed fuzzy logic with dominant knowledge representation and mobile robots [1]. Stanford University created the Automated Mathematician to explore new mathematical theories based on a heuristic algorithm. However, AI had become unpopular in the 1970s due to its inability to meet unrealistic expectations. The 1980s offered a promise for AI as sales of AI-based hardware and software for decision support applications exceeded $400 million [2]. 
By the 1990s, AI had entered a new era by integrating intelligent agent (IA) applications into different fields, such as games (Deep Blue, which is a chess program developed at Carnegie Mellon that defeated the world champion Garry Kasparov in 1997), spacecraft control, security (credit card fraud detection, face recognition), and transportation (automated scheduling systems) [3–7]. The beginning of the 21st century witnessed significant advances in AI in industrial business and government services with several initiatives, such as intelligent cities, intelligent economy, intelligent industry, and intelligent robots [3]. A unified definition of AI has not yet been offered; however, the concept of AI can be built from different definitions:(i)It is an interdisciplinary science because it interacts with cognitive science(ii)It uses creative techniques in modeling and mapping to improve average performance when solving complex problems(iii)It implements different processes to imitate intelligent human or animal behavior. Fourth, the developed system is either a virtual or a physical system with intelligent characteristics(iv)It attempts to duplicate human mental and sensory systems to model aspects of “humans” thoughts and behaviors(v)It passes the intelligence test if it interacts completely with other systems or creatures worldwide and in real time(vi)It follows a defined cycle of sense–plan–act The present study proposes the definition of AI as follows: “AI is an interdisciplinary science suitable for implementation in any domain that uses heuristic techniques, modeling, and AI-based design principles to solve complex problems. Single or combined processes in perceiving, reasoning, learning, understanding, and collaborating can improve system behavior and decision-making. The goal of AI is to enable virtual and physical intelligent agents, including humans and/or systems that continuously upgrade their intelligence to attain superintelligence. Agents should be able to integrate with one another in fully learning, teaching, adapting themselves to dynamic environments, communicating logically, and functioning efficiently with one another or with other creatures in the world and real time through sense–plan–act–react cycles.” The three-constituent principle for an agent suggests that “designing an intelligent agent involves constituents, the definition of the ecological niche, the definition of the desired behaviors and tasks, and design of the agent [8, 9].” Therefore, an agent’s intelligence can grow in time using the “here and now” perspective during interactions in different dynamic environments. In the present study, the robot agent Nao’s design is not among the required tasks, but the other two constituents are related to the environment and involve interactions with a human agent. Therefore, this work defines the ecological niche using different environments (a classroom, a lab, and a home), focusing on a mixed-reality environment. Nao’s functions are present according to the desired behavior as teaching simple mathematics to a child. The objective is to improve Nao’s learning ability and increase its intelligence. 
Thus, the study shows that “the three-constituents principle, the definition of the ecological niche, and the definition of the desired behaviors and tasks [2]” are sufficient to increase Nao’s intelligence.(i)The “here and now” perspective: related to three-time frames and shows that the behavior of any ‘agent’s system matures over a certain period and is associated with three states(ii)State-oriented: describes the actual mechanism of the agent at any instance of time(iii)Learning and development: relates to learning and development from state-oriented action(iv)Evolutionary: explains the emergence of a higher level of cognition through a phylogenetic perspective by emphasizing the power of artificial evolution and performing more complex tasks The Mixed Reality System TouchMe provides a third-person camera view of the system instead of human eyes [10]. The third-person camera view is considered more efficient for inexperienced users to interact with the robot [11]. Leutert et al. [12] reported using augmented spatial reality, a form of mixed reality, to relay information from the robot to the user’s workspace. They used a fixed-mobile projector. Socially aware interactive playgrounds [13] use various actuators to provide feedback to children. These actuators include projectors, speakers, and lights. These “interactive playgrounds can be placed at different locations, such as schools, streets, and gyms. Humans produce, interpret, and detect social signals (a communicative or informative signal conveyed directly or indirectly) [13].” Thus, their social signals can be used to enhance interactions with others. Various studies have been conducted on teaching humans to use robots in various environmental settings. RoboStage module implements learning among junior high school students through mixed reality systems [14–20]. Its creators compared the use of physical and virtual characters in a learning environment. RoboStage enables module interactions in robots to use voice and physical objects to achieve three stages of events: learning, situatedness, and blended. These events help students learn and practice activities, understand an environment, and execute an event. GENTORO uses a robot and a handheld projector to interact with children and perform a storytelling activity [21–27]. Its creators studied the effect of using a small handheld projector on the storytelling process. They also discussed the effects of using audio interactions instead of text and a wide-angle lens. The agent matures into an adult by which the process in any state is affected by its previous state. The present study has focused on state-oriented and learning and development states to observe its outcome in association with the evolutionary state [28–30]. The proposed definition enhances research at the experimental design level using multisensory technologies to improve intelligence interaction and growth by applying the AI design principle [31–33]. Enhanced interaction between humans and robots improves learning, especially in the case of a child. Motion and speech sensor nodes are fused to this end. Contemporary children are familiar with handheld devices such as mobile phones, tablets, pads, and virtual reality cameras. Therefore, the toolkit developed in this study uses a mixed reality system featuring different ways of interaction between a child and a robot agent. 
This study makes the following contributions.(i)Enhancing the humanoid robot Nao’s learning capabilities with the objective to increase the robot’s intelligence, using a multisensory perception of vision, hearing, speech, and gestures for HRI interactions(ii)Implementing Arabic Speech Agent for Nao using phonological knowledge and HMM to eventually activate child-robot communication [34](iii)It developed a toolkit using Arabic speech recognition and the Haar algorithm for robust image recognition in a mixed reality system architecture using big data enabling Nao to achieve a 94% success learning rate featuring different environments, and for LMC, the highest accuracy of 98.50% using the Haar algorithm The remainder of the study is organized mainly into Materials, Data, and Methods, which describe the architecture and experiment design, while the Discussion and Results section covers the intelligent big data management system using Haar algorithm-based Nao Agent Multisensory Communication in mixed reality and using LMC. Finally, the Conclusion and Future Work of the proposed study. 2. Materials, Data, and Methods The experiment initiated at King Abdul-Aziz University with an Aldebaran representative was related to a three-year-old robot Nao, which could not speak Arabic or solve simple mathematics. The study analysis was initiated by selecting the suitable artificial intelligence principle design for the study. The experiment’s goals and tasks were defined precisely to increase Nao’s intelligence to at least seven years old. The Nao mathematics intelligence measurements were based on solving 100 children’s exercises for basic addition, subtraction, and multiplication problems with human agents’ help. Nao also reached the level of understanding simple sentences for Arabic language speech recognition. The experiment time scale was set for a total of two years. The study is aimed at involving the robot Nao in the learning-teaching process using interaction and multisensory Nao agent perceptions by exposing Nao to different environments (see Figure 1), enabling communication concept design. However, the present work focused more on the mixed reality environment.
... The random variables X ij are independent of each other. An atomic choice (Poole, 1997) is a triple (C i , j , k) where C i ∈ P , j is a substitution that grounds C i and k ∈ {1, … , n i } identifies one of the head atoms. In practice (C i , j , k) corresponds to an assignment X ij = k. ...
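The triple notation in this excerpt can be mirrored directly as a small data structure. In the sketch below (clause identifiers, substitutions and head probabilities are invented) the probability of a consistent set of atomic choices is the product of the probabilities of the selected heads, since the X_ij are independent.

```python
# Illustration of the triples described above: an atomic choice picks head k
# of a ground probabilistic clause C_i under substitution theta_j; since the
# random variables X_ij are independent, a consistent set of atomic choices
# has probability equal to the product of the selected heads' probabilities.
# Clause identifiers, substitutions and numbers are made up.
head_probs = {
    ("c1", "theta1"): [0.6, 0.4],   # P(X_11 = 0), P(X_11 = 1)
    ("c2", "theta1"): [0.1, 0.9],
}

def composite_prob(atomic_choices):
    """atomic_choices: iterable of (clause, substitution, head_index)."""
    p = 1.0
    for clause, subst, k in atomic_choices:
        p *= head_probs[(clause, subst)][k]
    return p

kappa = [("c1", "theta1", 1), ("c2", "theta1", 0)]
print(composite_prob(kappa))   # 0.4 * 0.1 = 0.04
```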
Article
Full-text available
Probabilistic logic programming (PLP) combines logic programs and probabilities. Due to its expressiveness and simplicity, it has been considered as a powerful tool for learning and reasoning in relational domains characterized by uncertainty. Still, learning the parameter and the structure of general PLP is computationally expensive due to the inference cost. We have recently proposed a restriction of the general PLP language called hierarchical PLP (HPLP) in which clauses and predicates are hierarchically organized. HPLPs can be converted into arithmetic circuits or deep neural networks and inference is much cheaper than for general PLP. In this paper we present algorithms for learning both the parameters and the structure of HPLPs from data. We first present an algorithm, called parameter learning for hierarchical probabilistic logic programs (PHIL) which performs parameter estimation of HPLPs using gradient descent and expectation maximization. We also propose structure learning of hierarchical probabilistic logic programming (SLEAHP), that learns both the structure and the parameters of HPLPs from data. Experiments were performed comparing PHIL and SLEAHP with PLP and Markov Logic Networks state-of-the art systems for parameter and structure learning respectively. PHIL was compared with EMBLEM, ProbLog2 and Tuffy and SLEAHP with SLIPCOVER, PROBFOIL+, MLB-BC, MLN-BT and RDN-B. The experiments on five well known datasets show that our algorithms achieve similar and often better accuracies but in a shorter time.
... The Independent Choice Logic (Poole D., 1997; Poole D., 2008) is based on acyclic logic programs and a probability distribution over mutually exclusive and completely exhaustive sets of "choices". Each choice along with the logic program produces a first-order model. ...
Thesis
Full-text available
Ontologies (in the context of computer science) are engineering artifacts which formally represent knowledge in a domain, and are widely used as an instrument for enabling common understanding of the information among people or software agents. However, very little work can be found in the literature about ontology languages (and related software tools) that simultaneously support decision making under uncertainty and forward/backward compatibility with OWL, a W3C recommendation and the de facto standard for specifying ontologies. PR-OWL, a probabilistic extension to OWL provides the latter, but does not have standardized support for decision making. This work describes the PR-OWL Decision, which adds decision support to PR-OWL while keeping forward/backward compatibility with OWL. It also reports on an implementation of the extension and illustrates its use via case studies. The implementation includes a GUI and reasoning engine for PR-OWL Decision that were developed as part of this research. Both are based on Multi-Entity Decision Graph, an extension of Multi-Entity Bayesian Network for decision-making problems.
... Nevertheless, continuous efforts have been made in the field of decision support, especially with models supporting uncertainty [9,10,11,12,13,14,15]. Decision making is the process of selecting a course of action among several possibilities, based on the values or preferences of some decision maker. ...
Conference Paper
Full-text available
Decision making is a big topic in the Intelligence, Defense, and Security fields. However, very little work can be found in the literature about ontology languages that simultaneously support decision making under uncertainty, abstractions/generalizations with first-order expressiveness, and forward/backward compatibility with OWL, a standard language for ontologies. This work proposes PR-OWL Decision, a language which extends PR-OWL (an extension of OWL that supports uncertainty) to support first-order expressiveness, decision making under uncertainty, and backward/forward compatibility with OWL and PR-OWL.
... However, it is common to both strands that the probability distributions in question are defined over 'possible worlds' (first-order models) which (by marginalisation) give for each closed logical formula the probability that it is true. For example, in Poole (1997) "Possible worlds are built by choosing propositions from sets of independent choice alternatives". ...
Article
Stochastic logic programs (SLPs) are logic programs with parameterised clauses which define a log-linear distribution over refutations of goals. The log-linear distribution provides, by marginalisation, a distribution over variable bindings, allowing SLPs to compactly represent quite complex distributions. We analyse the fundamental statistical properties of SLPs addressing issues concerning infinite derivations, 'unnormalised' SLPs and impure SLPs. After detailing existing approaches to parameter estimation for log-linear models and their application to SLPs, we present a new algorithm called failure-adjusted maximisation (FAM). FAM is an instance of the EM algorithm that applies specifically to normalised SLPs and provides a closed-form for computing parameter updates within an iterative maximisation approach. We empirically show that FAM works on some small examples and discuss methods for applying it to bigger problems.
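One way to read the log-linear distribution described here: each refutation gets an unnormalised weight equal to the product of the labels of the clauses it uses, and marginalising over refutations gives a distribution over answers. The toy below (invented answers and labels, no actual resolution) just performs that bookkeeping.

```python
from collections import defaultdict

# Toy bookkeeping for the log-linear reading above: each refutation of a
# goal carries the labels of the clauses it used; its unnormalised weight
# is their product, and marginalising over refutations gives a distribution
# over answers.  The answers and labels are invented; no resolution is done.
refutations = [
    # (answer binding, labels of the parameterised clauses used)
    ("s(a)", [0.4, 0.5]),
    ("s(b)", [0.4, 0.5]),
    ("s(b)", [0.6, 0.5]),
]

weights = defaultdict(float)
for answer, labels in refutations:
    w = 1.0
    for label in labels:
        w *= label
    weights[answer] += w

z = sum(weights.values())   # normalisation over all refutations
print({answer: w / z for answer, w in weights.items()})
# s(a): 0.2 / 0.7 ~ 0.29,  s(b): (0.2 + 0.3) / 0.7 ~ 0.71
```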
... The approach for assigning a semantics to PCLTs is inspired by the distribution semantics (Sato 1995): a probabilistic theory defines a distribution over non-probabilistic theories by assuming independence among the choices in probabilistic constructs. The distribution semantics has emerged as one of the most successful approaches in probabilistic logic programming (PLP) and underlies many languages such as Probabilistic Horn Abduction (Poole 1993), independent choice logic (Poole 1997), PRISM (Sato and Kameya 1997), Logic Programs with Annotated Disjunctions (Vennekens et al. 2004) and ProbLog (De Raedt et al. 2007). ...
Article
Full-text available
Probabilistic logical models deal effectively with uncertain relations and entities typical of many real world domains. In the field of probabilistic logic programming usually the aim is to learn these kinds of models to predict specific atoms or predicates of the domain, called target atoms/predicates. However, it might also be useful to learn classifiers for interpretations as a whole: to this end, we consider the models produced by the inductive constraint logic system, represented by sets of integrity constraints, and we propose a probabilistic version of them. Each integrity constraint is annotated with a probability, and the resulting probabilistic logical constraint model assigns a probability of being positive to interpretations. To learn both the structure and the parameters of such probabilistic models we propose the system PASCAL for “probabilistic inductive constraint logic”. Parameter learning can be performed using gradient descent or L-BFGS. PASCAL has been tested on 11 datasets and compared with a few statistical relational systems and a system that builds relational decision trees (TILDE): we demonstrate that this system achieves better or comparable results in terms of area under the precision–recall and receiver operating characteristic curves, in a comparable execution time.
... The brief statement is enough to make everybody understand that it is commonly accepted as the education of humans through the presence of computers, and that without it problem solving could be a major issue. Human knowledge that has been converted into a format suitable for an artificial intelligence system, together with the knowledge generated by an artificial intelligence system, perhaps by collecting data and information and then analysing the data, information and knowledge at its disposal, helps organisations to take many important business-related decisions (Poole, 1997). ...
Article
Full-text available
Continuous adaptation and frugal innovation are extremely important components of the manufacturing industry. These factors are important because they lead to sustainable manufacturing using different modern technologies. To promote sustainable development, industry has realised that smart production is equally important, but it requires a transnational perspective and, to a large extent, various technological adaptations. Industries are engaged in intensive research efforts in the area of artificial intelligence and different artificial-intelligence-enabled techniques, such as machine learning, to establish their presence in different parts of the world, with a major focus on sustainable manufacturing at the core. This paper focuses on the application of artificial intelligence and machine learning in industry. It reviews research papers from reputed journals to understand the depth of study already done in the fields of artificial intelligence and machine learning. With the advent of Industry 4.0, artificial intelligence and machine learning are considered the driving variables of the smart manufacturing revolution in industry. Smart manufacturing has been attempted by different organisations in India, but they have not been able to implement it to the fullest possible extent, one of the reasons being the lack of application of artificial intelligence or machine learning. Most cases of the implementation of artificial intelligence in industry can be found in the US and European markets.
... In this field, many languages are equipped with the distribution semantics (Sato 1995). Examples of such languages are Independent Choice Logic (Poole 1997), PRISM (Sato 1995), Logic Programs with Annotated Disjunctions (LPADs) (Vennekens et al. 2004a) and ProbLog (De Raedt et al. 2007). All these languages have the same expressive power, as a theory in one language can be translated into each of the others (De . ...
Preprint
In Probabilistic Logic Programming (PLP) the most commonly studied inference task is to compute the marginal probability of a query given a program. In this paper, we consider two other important tasks in the PLP setting: the Maximum-A-Posteriori (MAP) inference task, which determines the most likely values for a subset of the random variables given evidence on other variables, and the Most Probable Explanation (MPE) task, the instance of MAP where the query variables are the complement of the evidence variables. We present a novel algorithm, included in the PITA reasoner, which tackles these tasks by representing each problem as a Binary Decision Diagram and applying a dynamic programming procedure on it. We compare our algorithm with the version of ProbLog that admits annotated disjunctions and can perform MAP and MPE inference. Experiments on several synthetic datasets show that PITA outperforms ProbLog in many cases.
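MPE inference as described here can be spelled out by enumeration on a tiny model. The sketch below fixes the evidence, scores every completion, and keeps the most probable one; the two-variable model and its numbers are invented, and the paper's algorithm uses Binary Decision Diagrams rather than enumeration.

```python
# Brute-force MPE on a made-up two-variable model: evidence is alarm=True,
# the query variable is burglary, and we keep the completion with the
# highest joint probability.
p_burglary = 0.1
p_alarm_given_burglary = {True: 0.9, False: 0.05}

def joint(burglary, alarm):
    pb = p_burglary if burglary else 1 - p_burglary
    pa = p_alarm_given_burglary[burglary]
    return pb * (pa if alarm else 1 - pa)

evidence_alarm = True
candidates = [{"burglary": b, "alarm": evidence_alarm} for b in (True, False)]
mpe = max(candidates, key=lambda w: joint(w["burglary"], w["alarm"]))
print(mpe, joint(mpe["burglary"], mpe["alarm"]))
# burglary=True: 0.1 * 0.9 = 0.09 beats burglary=False: 0.9 * 0.05 = 0.045
```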
... The random variables X ij are independent of each other. An atomic choice [22] is a triple (C i , θ j , k) where C i ∈ T, θ j is a substitution that grounds C i and k ∈ {1, . . . , v i } identifies one of the head atoms. ...
Article
Full-text available
In Bitcoin, if a miner is able to solve a computationally hard problem called proof of work, it will receive an amount of bitcoin as a reward which is the sum of the fees for the transactions included in a block plus an amount inversely proportional to the number of blocks discovered so far. At the moment of writing, the block reward is several orders of magnitude greater than the sum of transaction fees. Usually, miners try to collect the largest reward by including transactions associated with high fees. The main purpose of transaction fees is to prevent network spamming. However, they are also used to prioritize transactions. In order to use the minimum amount of fees, users usually have to find a compromise between fees and urgency of a transaction. In this paper, we develop a probabilistic logic model to experimentally analyze how fees affect confirmation time and miner’s revenue and to predict if an increase of average fees will generate a situation when the miner gets more reward by not following the protocol.
... This work also showed that the knowledge in a discrete Bayesian network can be represented within this framework and vice versa. This approach was extended into the independent choice logic, which provides a richer logic and allows for multi-agent decision-making under uncertainty [13,14]. ...
Article
Full-text available
This paper explores the nature of competition between hypotheses and the effect of failing to model this relationship correctly when performing abductive inference. In terms of the nature of competition, the importance of the interplay between direct and indirect pathways, where the latter depends on the evidence under consideration, is investigated. Experimental results show that models which treat hypotheses as mutually exclusive or independent perform well in an abduction problem that requires identifying the most probable hypothesis, provided there is at least some positive degree of competition between the hypotheses. However, even in such cases a significant limitation of these models is their inability to identify a second hypothesis that may well also be true.
Article
Full-text available
The research presented in this paper originated from my master's thesis, and I have chosen to publish it together with my supervisor, who is the second author, to contribute to the existing body of knowledge. The technology known as the Internet of Things (IoT) continues to expand the current Internet infrastructure by facilitating connections and interactions between the physical and cyber worlds. IoT and its associated applications have significantly enhanced the quality of life on Earth. Advanced wireless sensor networks and their revolutionary computing capabilities have paved the way for various IoT applications to explore new frontiers, impacting nearly every aspect of daily life. Concurrently, the imperative of energy optimization has emerged as a major concern, driving the adoption of sustainable practices and green technologies. The fusion of Artificial Intelligence (AI) with IoT represents a potent combination, enabling the realization of unique projects and innovative solutions. The potential impact of IoT and AI is vast, promising transformative changes in the future landscape. Recognizing the magnitude of these advancements, the European Commission is committed to collaborating with partners and authorities in the Western Balkans to fully implement the digital agenda. To this end, the EU and Western Balkans ICT Dialogue Initiative, established by the Commission in cooperation with regional partners, will oversee the implementation of the Digital Agenda.
Article
Full-text available
Reasoning about graphs, and learning from graph data is a field of artificial intelligence that has recently received much attention in the machine learning areas of graph representation learning and graph neural networks. Graphs are also the underlying structures of interest in a wide range of more traditional fields ranging from logic-oriented knowledge representation and reasoning to graph kernels and statistical relational learning. In this review we outline a broad map and inventory of the field of learning and reasoning with graphs that spans the spectrum from reasoning in the form of logical deduction to learning node embeddings. To obtain a unified perspective on such a diverse landscape we introduce a simple and general semantic concept of a model that covers logic knowledge bases, graph neural networks, kernel support vector machines, and many other types of frameworks. Still at a high semantic level, we survey common strategies for model specification using probabilistic factorization and standard feature construction techniques. Based on this semantic foundation we introduce a taxonomy of reasoning tasks that casts problems ranging from transductive link prediction to asymptotic analysis of random graph models as queries of different complexities for a given model. Similarly, we express learning in different frameworks and settings in terms of a common statistical maximum likelihood principle. Overall, this review aims to provide a coherent conceptual framework that provides a basis for further theoretical analyses of respective strengths and limitations of different approaches to handling graph data, and that facilitates combination and integration of different modeling paradigms.
Article
Representing uncertain information is crucial for modeling real world domains. This has been fully recognized both in the field of Logic Programming and of Description Logics (DLs), with the introduction of probabilistic logic languages and various probabilistic extensions of DLs respectively. Several works have considered the distribution semantics as the underlying semantics of Probabilistic Logic Programming (PLP) languages and probabilistic DLs (PDLs), and have then targeted the problem of reasoning and learning in them. This paper is a survey of inference, parameter and structure learning algorithms for PLP languages and PDLs based on the distribution semantics. A few of these algorithms are also available as web applications.
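A minimal Python sketch of the distribution semantics underlying these languages (facts, rule, and probabilities are made up): each independent probabilistic fact is true or false in a possible world, and the probability of a query is the total probability of the worlds where it holds:

    from itertools import product

    prob_facts = {"burglary": 0.1, "earthquake": 0.2}

    def alarm(world):
        # deterministic rules: alarm :- burglary.   alarm :- earthquake.
        return world["burglary"] or world["earthquake"]

    p_query = 0.0
    for values in product([True, False], repeat=len(prob_facts)):
        world = dict(zip(prob_facts, values))
        p_world = 1.0
        for f, p in prob_facts.items():
            p_world *= p if world[f] else (1 - p)
        if alarm(world):
            p_query += p_world

    print("P(alarm) =", p_query)   # 1 - 0.9 * 0.8 = 0.28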
Article
Despite recent advances in modern machine learning algorithms, the opaqueness of their underlying mechanisms continues to be an obstacle in adoption. To instill confidence and trust in artificial intelligence (AI) systems, explainable AI (XAI) has emerged as a response to improve modern machine learning algorithms' explainability. Inductive logic programming (ILP), a subfield of symbolic AI, plays a promising role in generating interpretable explanations because of its intuitive logic-driven framework. ILP effectively leverages abductive reasoning to generate explainable first-order clausal theories from examples and background knowledge. However, several challenges in developing methods inspired by ILP need to be addressed for their successful application in practice. For example, the existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances. This survey paper summarizes the recent advances in ILP and discusses statistical relational learning (SRL) and neural-symbolic algorithms, which offer synergistic views to ILP. Following a critical review of the recent advances, we delineate observed challenges and highlight potential avenues of further ILP-motivated research toward developing self-explanatory AI systems.
Chapter
While there has been much cross-fertilization between functional and logic programming—e.g., leading to functional models of many Prolog features—this appears to be much less the case regarding probabilistic programming, even though this is an area of mutual interest. Whereas functional programming often focuses on modeling probabilistic processes, logic programming typically focuses on modeling possible worlds. These worlds are made up of facts that each carry a probability and together give rise to a distribution semantics. The latter approach appears to be little-known in the functional programming community. This paper aims to remedy this situation by presenting a functional account of the distribution semantics of probabilistic logic programming that is based on possible worlds. We present a term monad for the monadic syntax of queries together with a natural interpretation in terms of boolean algebras. Then we explain that, because probabilities do not form a boolean algebra, they—and other interpretations in terms of commutative semirings—can only be computed after query normalisation to deterministic, decomposable negation normal form (d-DNNF). While computing the possible worlds readily gives such a normal form, it suffers from exponential blow-up. Using heuristic algorithms yields much better results in practice.
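A small Python sketch of the evaluation step described above (the node encoding and example formula are invented; the chapter itself works in a functional setting): in a d-DNNF, OR children are mutually exclusive and AND children share no variables, so probabilities are obtained by mapping OR to addition and AND to multiplication:

    import math

    p = {"a": 0.3, "b": 0.5}   # independent probabilistic facts

    def lit(name, positive=True):
        return ("lit", name, positive)

    def AND(*kids):
        return ("and",) + kids     # decomposable: children share no variables

    def OR(*kids):
        return ("or",) + kids      # deterministic: children mutually exclusive

    def prob(node):
        if node[0] == "lit":
            _, name, positive = node
            return p[name] if positive else 1.0 - p[name]
        vals = [prob(k) for k in node[1:]]
        return sum(vals) if node[0] == "or" else math.prod(vals)

    # d-DNNF for "a or b": either a holds, or a fails and b holds.
    query = OR(lit("a"), AND(lit("a", positive=False), lit("b")))
    print(prob(query))   # 0.3 + 0.7 * 0.5 = 0.65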
Chapter
The generation of comprehensible explanations is an essential feature of modern artificial intelligence systems. In this work, we consider probabilistic logic programming, an extension of logic programming which can be useful to model domains with relational structure and uncertainty. Essentially, a program specifies a probability distribution over possible worlds (i.e., sets of facts). The notion of explanation is typically associated with that of a world, so that one often looks for the most probable world as well as for the worlds where the query is true. Unfortunately, such explanations exhibit no causal structure. In particular, the chain of inferences required for a specific prediction (represented by a query) is not shown. In this paper, we propose a novel approach where explanations are represented as programs that are generated from a given query by a number of unfolding-like transformations. Here, the chain of inferences that proves a given query is made explicit. Furthermore, the generated explanations are minimal (i.e., contain no irrelevant information) and can be parameterized w.r.t. a specification of visible predicates, so that the user may hide uninteresting details from explanations.
Thesis
This thesis contributes to the development of a probabilistic logic programming language specific to the domain of cognitive neuroscience, coined NeuroLang, and presents some of its applications to the meta-analysis of the functional brain mapping literature. By relying on logic formalisms such as datalog, and their probabilistic extensions, we show how NeuroLang makes it possible to combine uncertain and heterogeneous data to formulate rich meta-analytic hypotheses. We encode the Neurosynth database into a NeuroLang program and formulate probabilistic logic queries resulting in term-association brain maps and coactivation brain maps similar to those obtained with existing tools, and highlighting existing brain networks. We prove the correctness of our model by using the joint probability distribution defined by the Bayesian network translation of probabilistic logic programs, showing that queries lead to the same estimations as Neurosynth. Then, we show that modeling term-to-study associations probabilistically based on term frequency-document inverse frequency (TF-IDF) measures results in better accuracy on simulated data, and a better consistency on real data, for two-term conjunctive queries on smaller sample sizes. Finally, we use NeuroLang to formulate and test concrete functional brain mapping hypotheses, reproducing past results. By solving segregation logic queries combining the Neurosynth database, topic models, and the data-driven functional atlas DiFuMo, we find supporting evidence of the existence of an heterogeneous organisation of the frontoparietal control network (FPCN), and find supporting evidence that the subregion of the fusiform gyrus called visual word form area (VWFA) is recruited within attentional tasks, on top of language-related cognitive tasks.
Chapter
In parameter learning, a partial interpretation most often contains information about only a subset of the parameters in the program. However, standard EM-based algorithms use all interpretations to learn all parameters, which significantly slows down learning. To tackle this issue, we introduce EMPLiFI, an EM-based parameter learning technique for probabilistic logic programs, that improves the efficiency of EM by exploiting the rule-based structure of logic programs. In addition, EMPLiFI enables parameter learning of multi-head annotated disjunctions in ProbLog programs, which was not yet possible in previous methods. Theoretically, we show that EMPLiFI is correct. Empirically, we compare EMPLiFI to LFI-ProbLog and EMBLEM. The results show that EMPLiFI is the most efficient in learning single-head annotated disjunctions. In learning multi-head annotated disjunctions, EMPLiFI is more accurate than EMBLEM, while LFI-ProbLog cannot handle this task. Keywords: Learning from interpretations, Probabilistic logic programming, Expectation maximization.
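As a toy illustration of EM over partial interpretations (a generic sketch, not the EMPLiFI algorithm; program, data, and parameter sharing are made up): two latent probabilistic facts f1 and f2 share a parameter p, the rules are win :- f1. and win :- f2., and each interpretation observes only whether win holds:

    observations = [True, True, False, True, False, True, True, False]  # made-up data

    p = 0.5                                    # initial parameter value
    for _ in range(50):                        # EM iterations
        expected_true = 0.0
        for win in observations:
            if win:
                p_win = 1 - (1 - p) ** 2       # P(win) = P(f1 or f2)
                expected_true += 2 * p / p_win # E[f1 + f2 | win] by symmetry
            # if win is False, both facts must be false: contributes 0
        p = expected_true / (2 * len(observations))   # M-step over 2 facts per case
    print("estimated p:", round(p, 4))

The fixed point satisfies 1 - (1 - p)^2 = (fraction of interpretations with win true), i.e., EM recovers the maximum-likelihood estimate for this toy model.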
Article
Large-scale probabilistic knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art to store and process such data is founded on probabilistic databases. Many systems based on probabilistic databases, however, still have certain semantic deficiencies, which limit their potential applications. We revisit the semantics of probabilistic databases, and argue that the closed-world assumption of probabilistic databases, i.e., the assumption that facts not appearing in the database have probability zero, conflicts with the everyday use of large-scale probabilistic knowledge bases. To address this discrepancy, we propose open-world probabilistic databases as a new probabilistic data model. In this new data model, the probabilities of unknown facts, also called open facts, can be assigned any probability value from a default probability interval. Our analysis entails that our model aligns better with many real-world tasks such as query answering, relational learning, knowledge base completion, and rule mining. We make various technical contributions. We show that the data complexity dichotomy between polynomial time and #P-hardness for evaluating unions of conjunctive queries on probabilistic databases can be lifted to our open-world model. This result is supported by an algorithm that computes the probabilities of the so-called safe queries efficiently. Based on this algorithm, we prove that evaluating safe queries is in linear time for probabilistic databases, under reasonable assumptions. This remains true in open-world probabilistic databases for a more restricted class of safe queries. We extend our data complexity analysis beyond unions of conjunctive queries, and obtain a host of complexity results for both classical and open-world probabilistic databases. We conclude our analysis with an in-depth investigation of the combined complexity in the respective models.
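A minimal Python sketch of the open-world idea (made-up facts, query, and default interval; not the paper's safe-query evaluation algorithm): facts absent from the database are open and may take any probability in [0, lam], so a monotone query's probability is bounded by plugging in the interval endpoints:

    db = {("friend", "a", "b"): 0.9, ("smokes", "b"): 0.7, ("friend", "a", "c"): 0.8}
    lam = 0.3   # default upper probability for open facts

    def p_fact(fact, open_value):
        return db.get(fact, open_value)   # closed fact: stored prob; open fact: 0 or lam

    def p_query(open_value):
        # query: exists x. friend(a, x) and smokes(x), with x in {b, c};
        # the two branches use disjoint independent facts, so combine by noisy-OR.
        branches = [
            p_fact(("friend", "a", "b"), open_value) * p_fact(("smokes", "b"), open_value),
            p_fact(("friend", "a", "c"), open_value) * p_fact(("smokes", "c"), open_value),
        ]
        result = 1.0
        for b in branches:
            result *= 1.0 - b
        return 1.0 - result

    print("P(query) in [", p_query(0.0), ",", p_query(lam), "]")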
Article
Probabilistic Logic Programming (PLP) is a powerful paradigm for the representation of uncertain relations among objects. Recently, programs with continuous variables, also called hybrid programs, have been proposed and assigned a semantics. Hybrid programs are capable of representing real-world measurements, but unfortunately the semantics proposal was imprecise, so the definition did not assign a probability to all queries. In this paper, we remedy this and formally define a new semantics for hybrid programs. We prove that the semantics assigns a probability to all queries for a large class of programs.
Article
In Probabilistic Logic Programming (PLP) the most commonly studied inference task is to compute the marginal probability of a query given a program. In this paper, we consider two other important tasks in the PLP setting: the Maximum-A-Posteriori (MAP) inference task, which determines the most likely values for a subset of the random variables given evidence on other variables, and the Most Probable Explanation (MPE) task, the instance of MAP where the query variables are the complement of the evidence variables. We present a novel algorithm, included in the PITA reasoner, which tackles these tasks by representing each problem as a Binary Decision Diagram and applying a dynamic programming procedure on it. We compare our algorithm with the version of ProbLog that admits annotated disjunctions and can perform MAP and MPE inference. Experiments on several synthetic datasets show that PITA outperforms ProbLog in many cases.
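For contrast with marginal inference, a brute-force Python sketch of MPE over independent probabilistic facts (illustrative program and numbers; PITA replaces the enumeration with dynamic programming on Binary Decision Diagrams):

    from itertools import product

    facts = {"burglary": 0.1, "earthquake": 0.2}

    def alarm(world):
        return world["burglary"] or world["earthquake"]   # alarm :- burglary. / earthquake.

    evidence_alarm = True
    best_world, best_p = None, -1.0
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        if alarm(world) != evidence_alarm:
            continue                                      # inconsistent with evidence
        p = 1.0
        for f, pf in facts.items():
            p *= pf if world[f] else 1 - pf
        if p > best_p:
            best_world, best_p = world, p

    print("MPE world:", best_world, "with probability", best_p)
    # here: burglary = False, earthquake = True, probability 0.9 * 0.2 = 0.18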
Chapter
Testing algorithms across a wide range of problem instances is crucial to ensure the validity of any claim about one algorithm’s superiority over another. However, when it comes to inference algorithms for probabilistic logic programs, experimental evaluations are limited to only a few programs. Existing methods to generate random logic programs are limited to propositional programs and often impose stringent syntactic restrictions. We present a novel approach to generating random logic programs and random probabilistic logic programs using constraint programming, introducing a new constraint to control the independence structure of the underlying probability distribution. We also provide a combinatorial argument for the correctness of the model, show how the model scales with parameter values, and use the model to compare probabilistic inference algorithms across a range of synthetic problems. Our model allows inference algorithm developers to evaluate and compare the algorithms across a wide range of instances, providing a detailed picture of their (comparative) strengths and weaknesses.
Article
Probabilistic Answer Set Programming (PASP) combines rules, facts, and independent probabilistic facts. We review several combinations of logic programming and probabilities, and argue that a very useful modeling paradigm is obtained by adopting a particular semantics for PASP, where one associates a credal set with each consistent program. We examine the basic properties of PASP under this credal semantics, in particular presenting novel results on its complexity and its expressivity, and we introduce an inference algorithm to compute (upper) probabilities given a program.
Chapter
In this chapter we survey languages that specify probability distributions using graphs, predicates, quantifiers, fixed-point operators, recursion, and other logical and programming constructs. Many of these languages have roots both in probabilistic logic and in the desire to enhance Bayesian networks and Markov random fields. We examine their origins and comment on various proposals up to recent developments in probabilistic programming.
Article
In this paper, we present the agent programming language POGTGolog (Partially Observable Game-Theoretic Golog), which integrates explicit agent programming in Golog with game-theoretic multi-agent planning in partially observable stochastic games. In this framework, we assume one team of cooperative agents acting under partial observability, where the agents may also have different initial belief states and not necessarily the same rewards. POGTGolog allows for specifying a partial control program in a high-level logical language, which is then completed by an interpreter in an optimal way. To this end, we define a formal semantics of POGTGolog programs in terms of Nash equilibria, and we then specify a POGTGolog interpreter that computes one of these Nash equilibria.
Article
Interacting actions – actions whose joint effect differs from the union of their individual effects – are challenging both to represent and to plan with due to their combinatorial nature. So far, there have been few attempts to provide a succinct language for representing them that can also support efficient centralized planning and distributed privacy preserving planning. In this paper we suggest an approach for representing interacting actions succinctly and show how such a domain model can be compiled into a standard single-agent planning problem as well as to privacy preserving multi-agent planning. We test the performance of our method on a number of novel domains involving interacting actions and privacy.
Chapter
Many tasks often regarded as requiring some form of intelligence to perform can be seen as instances of query answering over a semantically rich knowledge base. In this context, two of the main problems that arise are: (i) uncertainty, including both inherent uncertainty (such as events involving the weather) and uncertainty arising from lack of sufficient knowledge; and (ii) inconsistency, which involves dealing with conflicting knowledge. These unavoidable characteristics of real world knowledge often yield complex models of reasoning; assuming these models are mostly used by humans as decision-support systems, meaningful explainability of their results is a critical feature. These lecture notes are divided into two parts, one for each of these basic issues. In Part 1, we present basic probabilistic graphical models and discuss how they can be incorporated into powerful ontological languages; in Part 2, we discuss both classical inconsistency-tolerant semantics for ontological query answering based on the concept of repair and other semantics that aim towards more flexible yet principled ways to handle inconsistency. Finally, in both parts we ponder the issue of deriving different kinds of explanations that can be attached to query results.
Book
Artificial Intelligence, by David L. Poole.
Article
Research on multi-agent systems has recently been boosted by manufacturing and logistics, motivated by factors such as the presence of independent human decision-makers with individual goals, the aspiration to manage the complexity of decision-making in large organizations, and the simplicity and robustness of self-reacting distributed systems. After a survey of the multi-agent paradigm and its applications, the paper introduces the notion of a hybrid holonic system to study the effect of supervision on a system whose elements negotiate and cooperate in a rule-governed environment to obtain resources for system operation. The supervisor can encourage or discourage agents by assigning or denying resources to them. A simple single-decider optimization model, drawn from a real application, is described, and solution methodologies for optimal resource allocation fitting different scenarios (centralized, distributed, multi-agent) are discussed, identifying ranges of autonomy, quantifying rewards, and defining a negotiation protocol between the agents and the supervisor. The aim of the paper is to describe, through an example, a general methodology for quantitative decision-making in multi-agent organizations.
Book
Full-text available
This book investigates the application of logic to problem-solving and computer programming. It assumes no previous knowledge of these fields, and may be appropriate therefore as an introduction to logic, the theory of problem-solving, and computer programming.
Article
Full-text available
We study here a natural subclass of the locally stratified programs which we call acyclic. Acyclic programs enjoy several natural properties. First, they terminate for a large and natural class of general goals, so they could be used as terminating PROLOG programs. Next, their semantics can be defined in several equivalent ways. In particular we show that the immediate consequence operator of an acyclic program P has a unique fixpoint M_P, which coincides with the perfect model of P, is the unique Herbrand model of the completion of P and can be identified with the unique fixpoint of the 3-valued immediate consequence operator associated with P. The completion of an acyclic program P is shown to satisfy an even stronger property: addition of a domain closure axiom results in a theory which is complete and decidable with respect to a large class of formulas including the variable-free ones. This implies that M_P is recursive. On the procedural side we show that SLS-resolution and SLDNF-resolution for acyclic programs coincide, are effective, sound and (non-floundering) complete with respect to the declarative semantics. Finally, we show that various forms of temporal reasoning, as exemplified by the so-called Yale Shooting Problem, can be naturally described by means of acyclic programs.
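A small propositional Python sketch of the property exploited here (example program invented; the paper treats the general first-order case): because an acyclic program admits a level mapping, its atoms can be evaluated in dependency order, yielding the unique model in a single pass:

    # Rules: head -> list of bodies, each body = (positive atoms, negated atoms).
    rules = {
        "q": [([], [])],            # q.
        "s": [],                    # no rule for s, so s is false
        "r": [(["s"], [])],         # r :- s.
        "p": [(["q"], ["r"])],      # p :- q, not r.
    }

    def deps(atom):
        return {a for pos, neg in rules[atom] for a in pos + neg}

    # A topological order exists because the program is acyclic.
    order, done = [], set()
    def visit(a):
        if a in done:
            return
        for d in deps(a):
            visit(d)
        done.add(a)
        order.append(a)
    for a in rules:
        visit(a)

    model = {}
    for a in order:
        model[a] = any(all(model[x] for x in pos) and not any(model[x] for x in neg)
                       for pos, neg in rules[a])
    print(model)   # {'q': True, 's': False, 'r': False, 'p': True}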
Article
Full-text available
A new computational framework is presented, called agent-oriented programming (AOP), which can be viewed as a specialization of object-oriented programming. The state of an agent consists of components such as beliefs, decisions, capabilities, and obligations; for this reason the state of an agent is called its mental state. The mental state of agents is described formally in an extension of standard epistemic logics: beside temporalizing the knowledge and belief operators, AOP introduces operators for obligation, decision, and capability. Agents are controlled by agent programs, which include primitives for communicating with other agents. In the spirit of speech act theory, each communication primitive is of a certain type: informing, requesting, offering, and so on. This article presents the concept of AOP, discusses the concept of mental state and its formal underpinning, defines a class of agent interpreters, and then describes in detail a specific interpreter that has been implemented.
Article
Full-text available
The theoretical foundations of the logical approach to artificial intelligence are presented. Logical languages are widely used for expressing the declarative knowledge needed in artificial intelligence systems. Symbolic logic also provides a clear semantics for knowledge representation languages and a methodology for analyzing and comparing deductive inference techniques. Several observations gained from experience with the approach are discussed. Finally, we confront some challenging problems for artificial intelligence and describe what is being done in an attempt to solve them.
Conference Paper
Full-text available
A query evaluation process for a logic data base comprising a set of clauses is described. It is essentially a Horn clause theorem prover augmented with a special inference rule for dealing with negation. This is the negation as failure inference rule whereby ~P can be inferred if every possible proof of P fails. The chief advantage of the query evaluator described is the efficiency with which it can be implemented. Moreover, we show that the negation as failure rule only allows us to conclude negated facts that could be inferred from the axioms of the completed data base, a data base of relation definitions and equality schemas that we consider is implicitly given by the data base of clauses. We also show that when the clause data base and the queries satisfy certain constraints, which still leaves us with a data base more general than a conventional relational data base, the query evaluation process will find every answer that is a logical consequence of the completed data base.
Conference Paper
Full-text available
Most AI representations and algorithms for plan generation have not included the concept of information-producing actions (also called diagnostics, or tests, in the decision making literature). We present a planning representation and algorithm that models information-producing actions and constructs plans that exploit the information produced by those actions. We extend the buridan (Kushmerick et al. 1994) probabilistic planning algorithm, adapting the action representation to model the...
Conference Paper
Full-text available
Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small state spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with recent approximation methods.
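For reference, a flat (modified) policy iteration sketch in Python on a tiny hand-made MDP; SPI performs the same evaluation and improvement steps, but over a structured, Bayesian-network representation instead of an enumerated state space:

    # Transition probabilities and rewards are made up for illustration.
    states, actions, gamma = ["s0", "s1"], ["a", "b"], 0.9
    P = {  # P[s][a] = {s': prob}
        "s0": {"a": {"s0": 0.2, "s1": 0.8}, "b": {"s0": 0.9, "s1": 0.1}},
        "s1": {"a": {"s0": 0.0, "s1": 1.0}, "b": {"s0": 0.5, "s1": 0.5}},
    }
    R = {"s0": {"a": 0.0, "b": 1.0}, "s1": {"a": 2.0, "b": 0.0}}

    policy = {s: "a" for s in states}
    V = {s: 0.0 for s in states}
    for _ in range(100):
        # partial policy evaluation (a few backups, as in modified policy iteration)
        for _ in range(20):
            V = {s: R[s][policy[s]] +
                    gamma * sum(pr * V[t] for t, pr in P[s][policy[s]].items())
                 for s in states}
        # greedy policy improvement
        new_policy = {
            s: max(actions, key=lambda a: R[s][a] +
                   gamma * sum(pr * V[t] for t, pr in P[s][a].items()))
            for s in states
        }
        if new_policy == policy:
            break
        policy = new_policy

    print("policy:", policy, "values:", {s: round(v, 2) for s, v in V.items()})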
Article
Full-text available
A program is bounded optimal for a given computational device for a given environment, if the expected utility of the program running on the device in the environment is at least as high as that of all other programs for the device. Bounded optimality differs from the decision-theoretic notion of rationality in that it explicitly allows for the finite computational resources of real agents. It is thus a central issue in the foundations of artificial intelligence. In this paper we consider a restricted class of agent architectures, in which a program consists of a sequence of decision procedures generated by a learning program or given a priori. For this class of agents, we give an efficient construction algorithm that generates a bounded optimal program for any episodic environment, given a set of training examples. The algorithm includes solutions to a new class of optimization problems, namely scheduling computational processes for real-time environments. This cl...
Book
Full-text available
Reasoning about knowledge—particularly the knowledge of agents who reason about the world and each other's knowledge—was once the exclusive province of philosophers and puzzle solvers. More recently, this type of reasoning has been shown to play a key role in a surprising number of contexts, from understanding conversations to the analysis of distributed computer algorithms. Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed systems, artificial intelligence, and game theory. It brings eight years of work by the authors into a cohesive framework for understanding and analyzing reasoning about knowledge that is intuitive, mathematically well founded, useful in practice, and widely applicable. The book is almost completely self-contained and should be accessible to readers in a variety of disciplines, including computer science, artificial intelligence, linguistics, philosophy, cognitive science, and game theory. Each chapter includes exercises and bibliographic notes. Bradford Books imprint
Article
Full-text available
Most AI representations and algorithms for plan generation have not included the concept of information-producing actions (also called diagnostics, or tests, in the decision making literature). We present a planning representation and algorithm that models information-producing actions and constructs plans that exploit the information produced by those actions. We extend the buridan (Kushmerick et al. 1994) probabilistic planning algorithm, adapting the action representation to model the behavior of imperfect sensors, and combine it with a framework for contingent action that extends the cnlp algorithm (Peot and Smith 1992) for conditional execution. The result, c-buridan, is an implemented planner that builds plans with probabilistic information-producing actions and contingent execution. One way of coping with uncertainty in the world is to build plans that include both information-producing actions and other actions whose execution is contingent on that information. ...
Article
This paper introduces a new representation for Boolean functions, called decision lists, and shows that they are efficiently learnable from examples. More precisely, this result is established for k-DL – the set of decision lists with conjunctive clauses of size k at each decision. Since k-DL properly includes other well-known techniques for representing Boolean functions such as k-CNF (formulae in conjunctive normal form with at most k literals per clause), k-DNF (formulae in disjunctive normal form with at most k literals per term), and decision trees of depth k, our result strictly increases the set of functions that are known to be polynomially learnable, in the sense of Valiant (1984). Our proof is constructive: we present an algorithm that can efficiently construct an element of k-DL consistent with a given set of examples, if one exists.
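A small Python sketch of the greedy construction underlying the learnability result, restricted here to 1-DL with single-literal tests and a made-up dataset: repeatedly pick a (literal, label) pair that covers some remaining examples without misclassifying any, append it to the list, and discard the covered examples:

    # Examples: (boolean feature vector, label); target happens to be label = x0.
    examples = [((1, 0, 1), 1), ((1, 1, 0), 1), ((0, 1, 1), 0), ((0, 0, 1), 0)]
    n = 3

    def literal_holds(lit, x):
        index, positive = lit
        return bool(x[index]) == positive

    def learn_decision_list(examples):
        literals = [(i, v) for i in range(n) for v in (True, False)]
        decision_list, remaining = [], list(examples)
        while remaining:
            for lit in literals:
                covered = [(x, y) for x, y in remaining if literal_holds(lit, x)]
                labels = {y for _, y in covered}
                if covered and len(labels) == 1:          # consistent, non-empty
                    decision_list.append((lit, labels.pop()))
                    remaining = [(x, y) for x, y in remaining
                                 if not literal_holds(lit, x)]
                    break
            else:
                return None                               # no consistent 1-DL exists
        return decision_list

    print(learn_decision_list(examples))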
Chapter
This chapter presents theory, applications, and computational methods for Markov Decision Processes (MDP's). MDP's are a class of stochastic sequential decision processes in which the cost and transition functions depend only on the current state of the system and the current action. These models have been applied in a wide range of subject areas, most notably in queueing and inventory control. A sequential decision process is a model for a dynamic system under the control of a decision maker. Sequential decision processes are classified according to the times (epochs) at which decisions are made, the length of the decision making horizon, the mathematical properties of the state and action spaces, and the optimality criteria. The focus of this chapter is problems in which decisions are made periodically at discrete time points. The state and action sets are either finite, countable, compact or Borel; their characteristics determine the form of the reward and transition probability functions. The optimality criteria considered in the chapter include finite and infinite horizon expected total reward, infinite horizon expected total discounted reward, and average expected reward. The main objectives in analyzing sequential decision processes in general and MDP's in particular include (1) providing an optimality equation that characterizes the supremal value of the objective function, (2) characterizing the form of an optimal policy if it exists, (3) developing efficient computational procedures for finding policies that are optimal or close to optimal. The optimality or Bellman equation is the basic entity in MDP theory and almost all existence, characterization, and computational results are based on its analysis.
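For the infinite-horizon expected total discounted reward criterion, the optimality (Bellman) equation referred to above is standardly written as (standard textbook form, not quoted from the chapter):

\[ v^*(s) \;=\; \max_{a \in A(s)} \Big\{ r(s,a) \;+\; \gamma \sum_{s' \in S} p(s' \mid s, a)\, v^*(s') \Big\}, \qquad s \in S. \]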
Article
Weld pool surface provides a critical information necessary for understanding arc welding processes. A novel sensing mechanism was proposed to sense the three-dimensional information of the weld pool surface. The reflection of laser stripes from the mirror-like pool surface is deformed by the shape of the weld pool surface. The pool surface is clearly shown by the reflection pattern. In order to calculate the pool surface, the geometry of the reflection must first be extracted from the image. In this paper, image processing algorithms are proposed to extract the torch and electrode boundary, pool boundary, and skeleton of the deformed reflection of the laser stripes. The image processing speed reaches 4 frames per second. Also, the accuracy of the image processing is sufficient for the computation of the pool surface. Currently, the extracted information in the image processing is being used to calculate the three-dimensional shape of the weld pool surface.
Article
An influence diagram is a graphical representation of a decision problem that is at once a formal description of a decision problem that can be treated by computers and a representation that is easily understood by decision makers who may be unskilled in the art of complex probabilistic modeling. The power of an influence diagram, both as an analysis tool and a communication tool, lies in its ability to concisely summarize the structure of a decision problem. However, when confronted with highly asymmetric problems in which particular acts or events lead to very different possibilities, many analysts prefer decision trees to influence diagrams. In this paper, we extend the definition of an influence diagram by introducing a new representation for its conditional probability distributions. This extended influence diagram representation, combining elements of the decision tree and influence diagram representations, allows one to clearly and efficiently represent asymmetric decision problems and provides an attractive alternative to both the decision tree and conventional influence diagram representations.
Article
We present a new and more symmetric version of the circumscription method of nonmonotonic reasoning first described in (McCarthy 1980) and some applications to formalizing common sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are nonmonotonic treatments of is-a hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.
Book
Pages out of order, but all there.
Book
"This is the classic work upon which modern-day game theory is based. What began more than sixty years ago as a modest proposal that a mathematician and an economist write a short paper together blossomed, in 1944, when Princeton University Press published Theory of Games and Economic Behavior. In it, John von Neumann and Oskar Morgenstern conceived a groundbreaking mathematical theory of economic and social organization, based on a theory of games of strategy. Not only would this revolutionize economics, but the entirely new field of scientific inquiry it yielded--game theory--has since been widely used to analyze a host of real-world phenomena from arms races to optimal policy choices of presidential candidates, from vaccination policy to major league baseball salary negotiations. And it is today established throughout both the social sciences and a wide range of other sciences. This sixtieth anniversary edition includes not only the original text but also an introduction by Harold Kuhn, an afterword by Ariel Rubinstein, and reviews and articles on the book that appeared at the time of its original publication in the New York Times, tthe American Economic Review, and a variety of other publications. Together, these writings provide readers a matchless opportunity to more fully appreciate a work whose influence will yet resound for generations to come.
Article
Reasoning about change requires predicting how long a proposition, having become true, will continue to be so. Lacking perfect knowledge, an agent may be constrained to believe that a proposition persists indefinitely simply because there is no way for the agent to infer a contravening proposition with certainty. In this paper, we describe a model of causal reasoning that accounts for knowledge concerning cause-and-effect relationships and knowledge concerning the tendency for propositions to persist or not as a function of time passing. Our model has a natural encoding in the form of a network representation for probabilistic models. We consider the computational properties of our model by reviewing recent advances in computing the consequences of models encoded in this network representation. Finally, we discuss how our probabilistic model addresses certain classical problems in temporal reasoning (e.g., the frame and qualification problems).
Article
A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted. Designing such a program requires commitments about what knowledge is and how it is obtained. Thus, some of the major traditional problems of philosophy arise in artificial intelligence.
Article
A new architecture for controlling mobile robots is described. Layers of control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules that communicate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system has been used to control a mobile robot wandering around unconstrained laboratory areas and computer machine rooms. Eventually it is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings using an onboard arm to perform simple tasks.
Article
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
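A minimal Python sketch in the spirit of this framework (program and priors invented, not taken from the paper): independent hypotheses with prior probabilities, Horn rules giving their consequences, and posteriors of the hypotheses given an observation obtained by weighing the worlds that entail it:

    from itertools import product

    hypotheses = {"rain": 0.3, "sprinkler_on": 0.2}

    def grass_wet(w):
        # grass_wet :- rain.   grass_wet :- sprinkler_on.
        return w["rain"] or w["sprinkler_on"]

    worlds = []
    for values in product([True, False], repeat=len(hypotheses)):
        w = dict(zip(hypotheses, values))
        p = 1.0
        for h, ph in hypotheses.items():
            p *= ph if w[h] else 1 - ph
        worlds.append((w, p))

    p_obs = sum(p for w, p in worlds if grass_wet(w))
    for h in hypotheses:
        p_post = sum(p for w, p in worlds if grass_wet(w) and w[h]) / p_obs
        print("P(%s | grass_wet) = %.3f" % (h, p_post))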
Article
Intelligent agents are systems that have a complex, ongoing interaction with an environment that is dynamic and imperfectly predictable. Agents are typically difficult to program because the correctness of a program depends on the details of how the agent is situated in its environment. In this paper, we present a methodology for the design of situated agents that is based on situated-automata theory. This approach allows designers to describe the informational content of an agent's computational states in a semantically rigorous way without requiring a commitment to conventional run-time symbolic processing. We start by outlining this situated view of representation, then show how it contributes to design methodologies for building systems that track perceptual conditions and take purposeful actions in their environments.
Article
This paper investigates the complexity of finding max-min strategies for finite two-person zero-sum games in the extensive form. The problem of determining whether a player with imperfect recall can guarantee himself a certain payoff is shown to be NP-hard. When both players have imperfect recall, this problem is even harder. Moreover, the max-min behavior strategy of such a player may use irrational numbers. Thus, for games with imperfect recall, computing the max-min strategy or the value of the game is a hard problem. For a game with perfect recall, we present an algorithm for computing a max-min behavior strategy, which runs in time polynomial in the size of the game tree. Journal of Economic Literature Classification Number: 026.
Article
Hybrid dynamic systems are systems consisting of a nontrivial mixture of discrete and continuous components, such as a controller realized by a combination of digital and analog circuits, a robot composed of a digital controller and a physical plant, or a robotic system consisting of a computer-controlled robot coupled to a continuous environment. Hybrid dynamic systems are more general than traditional real-time systems. The former can be composed of continuous subsystems in addition to discrete and event-controlled components. In this paper, we develop a semantic model, constraint nets (CN), for hybrid systems. CN captures the most general structure of dynamic systems so that systems with discrete and continuous time, discrete and continuous variables, and asynchronous as well as synchronous event structures, can be modeled in a unitary framework. Using aggregation operators, a system can be modeled hierarchically in CN; therefore, the dynamics of the environment as well as the dynamics of the plant and the dynamics of the controller can be modeled individually and then integrated. Based on abstract algebra and topology, CN supports multiple levels of abstraction, so that a system can be analyzed at different levels of detail. CN also provides a rigorous formal programming semantics for the design of hybrid real-time embedded systems.
Article
We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as “The probability that a randomly chosen bird flies is greater than 0.9.” The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief such as “The probability that Tweety (a particular bird) flies is greater than 0.9.” We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about probability.
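The two readings from the abstract can be written, in notation reconstructed along the lines of the paper's (not quoted from it), as a statistical statement about a randomly chosen domain element versus a degree-of-belief statement about a particular individual:

\[ w_x\big(\mathit{Flies}(x) \mid \mathit{Bird}(x)\big) > 0.9 \qquad \text{versus} \qquad w\big(\mathit{Flies}(\mathit{Tweety})\big) > 0.9, \]

where the first places the probability on the domain and the second on possible worlds.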
Article
We introduce three-valued extensions of major nonmonotonic formalisms and we prove that the recently proposed well-founded semantics of logic programs is equivalent, for arbitrary logic programs, to three-valued forms of McCarthy's circumscription, Reiter's closed world assumption, Moore's autoepistemic logic and Reiter's default theory. This result not only provides a further justification of the well-founded semantics as a natural extension of the perfect model semantics from the class of stratified programs to the class of all logic programs, but it also establishes the class of all logic programs as a large class of theories, for which natural forms of all four nonmonotonic formalisms coincide. It also paves the way for using efficient computation methods, developed for logic programming, as inference mechanisms for nonmonotonic reasoning.
Article
We present a new and more symmetric version of the circumscription method of non-monotonic reasoning first described by McCarthy [9] and some applications to formalizing common-sense knowledge. The applications in this paper are mostly based on minimizing the abnormality of different aspects of various entities. Included are non-monotonic treatments of “is-a” hierarchies, the unique names hypothesis, and the frame problem. The new circumscription may be called formula circumscription to distinguish it from the previously defined domain circumscription and predicate circumscription. A still more general formalism called prioritized circumscription is briefly explored.
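For background, the standard second-order form of circumscribing a predicate P in a theory A(P) can be written as follows (a standard formulation given for orientation, not a formula quoted from the paper):

\[ \mathrm{Circ}[A;P] \;\equiv\; A(P) \,\wedge\, \forall p\,\big[\, A(p) \wedge (p \le P) \;\rightarrow\; (P \le p) \,\big], \]

where \(p \le P\) abbreviates \(\forall x\,(p(x) \rightarrow P(x))\); that is, the extension of P is minimal among all predicates satisfying A.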
Article
This paper integrates logical and probabilistic approaches to the representation of planning problems by developing a first-order logic of time, chance, and action. We start by making explicit and precise commonsense notions about time, chance, and action central to the planning problem. We then develop a logic, the semantics of which incorporates these intuitive properties. The logical language integrates both modal and probabilistic constructs and allows quantification over time points, probability values, and domain individuals. Probability is treated as a sentential operator in the language, so it can be arbitrarily nested and combined with other logical operators. The language can represent the chance that facts hold and events occur at various times. It can represent the chance that actions and other events affect the future. The model of action distinguishes between action feasibility, executability, and effects. We present a proof theory for the logic and show how the logic can be used to describe actions in such a way that the action descriptions can be composed to infer properties of plans via the proof theory.
Conference Paper
In this paper, we show a new approach for reasoning about time and probability that combines a formal declarative language with a graph representation of systems of random variables for making inferences. First, we provide a continuous-time logic for expressing knowledge about time and probability. Then, we introduce the time net, a kind of Bayesian network for supporting inference with statements in the logic. Time nets encode the probability of facts and events over time. We provide a simulation algorithm to compute probabilities for answering queries about a time net. Finally, we consider an incremental probabilistic temporal database based on the logic and time nets to support temporal reasoning and planning applications. The result is an approach that is semantically well-founded, expressive, and practical.
Article
We describe a representation and set of inference techniques for the dynamic construction of probabilistic and decision-theoretic models expressed as networks. In contrast to probabilistic reasoning schemes that rely on fixed models, we develop a representation that implicitly encodes a large number of possible model structures. Based on a particular query and state of information, the system constructs a customized belief net for that particular situation. We develop an interpretation of the network construction process in terms of the implicit networks encoded in the database. A companion method for constructing belief networks with decisions and values (decision networks) is also developed that uses sensitivity analysis to focus the model building process. Finally, we discuss some issues of control of model construction and describe examples of constructing networks.
Article
This paper proposes a new logic programming language called GOLOG whose interpreter automatically maintains an explicit representation of the dynamic world being modeled, on the basis of user supplied axioms about the preconditions and effects of actions and the initial state of the world. This allows programs to reason about the state of the world and consider the effects of various possible courses of action before committing to a particular behavior. The net effect is that programs may be written at a much higher level of abstraction than is usually possible. The language appears well suited for applications in high level control of robots and industrial processes, intelligent software agents, discrete event simulation, etc. It is based on a formal theory of action specified in an extended version of the situation calculus. A prototype implementation in Prolog has been developed.
Article
The paper is concerned with the succinct axiomatization and efficient deduction of non-change, within McCarthy and Hayes' Situation Calculus. The idea behind the proposed approach is this: suppose that in a room containing a man, a robot and a cat as the only potential agents, the only action taken by the man within a certain time interval is to walk from one place to another, while the robot's only actions are to pick up a box containing the (inactive) cat and carry it from its initial place to another. We wish to prove that a certain object (such as the cat, or the doormat) did not change color. We reason that the only way it could have changed color is for the man or the robot to have painted or dyed it. But since these are not among the actions which actually occurred, the color of the object is unchanged. Thus we need no frame axioms to the effect that walking and carrying leave colors unchanged (which is in general false in multi-agent worlds), and no default schema that properties change only when we can prove they do (which is in general false in incompletely known worlds). Instead we use explanation-closure axioms specifying all primitive actions which can produce a given type of change within the setting of interest. A method similar to this has been proposed by Andrew Haas for single-agent, serial worlds. The contribution of the present paper lies in showing (1) that such methods do indeed encode non-change succinctly, (2) are independently motivated, (3) can be used to justify highly efficient methods of inferring non-change, specifically the "sleeping dog" strategy of STRIPS, and (4) can be extended to simple multiagent worlds with concurrent actions. An ultimate limitation may lie in the lack of a uniform strategy for deciding what fluents can be affected by what agents in a given domain. In this respect probabilistic methods appear promising.
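An explanation-closure axiom of the kind described might, for the colour example in the abstract, take the following form (an illustrative reconstruction, not a formula from the paper):

\[ \mathit{colour}(x, c, s) \wedge \neg\,\mathit{colour}(x, c, \mathit{do}(a, s)) \;\rightarrow\; \exists c'\,\big(a = \mathit{paint}(x, c') \,\vee\, a = \mathit{dye}(x, c')\big), \]

stating that the only primitive actions that can change an object's colour are painting or dyeing it.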