Book

Non-Axiomatic Logic: A Model of Intelligent Reasoning

Authors: Pei Wang

Abstract

This book provides a systematic and comprehensive description of Non-Axiomatic Logic, which is the result of the author's research for about three decades. Non-Axiomatic Logic is designed to provide a uniform logical foundation for Artificial Intelligence, as well as an abstract description of the “laws of thought” followed by the human mind. Different from “mathematical” logic, where the focus is the regularity required when demonstrating mathematical conclusions, Non-Axiomatic Logic is an attempt to return to the original aim of logic, that is, to formulate the regularity in actual human thinking. To achieve this goal, the logic is designed under the assumption that the system has insufficient knowledge and resources with respect to the problems to be solved, so that the “logical conclusions” are only valid with respect to the available knowledge and resources. Reasoning processes according to this logic cover cognitive functions like learning, planning, decision making, problem solving, etc. This book is written for researchers and students in Artificial Intelligence and Cognitive Science, and can be used as a textbook for graduate-level or upper-level undergraduate courses on Non-Axiomatic Logic. © 2013 by World Scientific Publishing Co. Pte. Ltd. All rights reserved.
... or operations, each occurring at a certain discrete time-step. Here, events do not encode an entire state, just certain parts of it, such as temperature information coming from a sensory device, encoded by a composition of terms/IDs, called "Compound Term" (see [18]). Now, temporal patterns can become building blocks of hypotheses, which should capture useful regularities in the experience of a system. ...
... In particular, four aspects will be described at a relatively high level of abstraction, namely: evidence, budget, concepts, and bags. Detailed explanations of each of these can be found in [8,17,18]. ...
... For detailed formulas, see [8,18]. ...
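The excerpts above describe events as compound terms occurring at discrete time-steps, which can then be chained into temporal patterns that serve as building blocks of hypotheses. As a rough illustration only (the type names `CompoundTerm` and `Event` and the sensor identifiers are hypothetical, not taken from the cited papers), such a representation might look like:

```python
# Illustrative sketch (not from the cited papers): events as compound terms
# occurring at discrete time-steps, so that sequences of events can serve as
# temporal patterns for hypotheses.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class CompoundTerm:
    connector: str             # e.g. "*" for a product of component terms/IDs
    components: Tuple[str, ...]

    def __str__(self) -> str:
        return f"({self.connector}, {', '.join(self.components)})"

@dataclass(frozen=True)
class Event:
    term: CompoundTerm         # what was observed (e.g. a sensor reading)
    time_step: int             # discrete occurrence time

# A temperature reading from a sensory device, encoded as a compound term:
reading = Event(CompoundTerm("*", ("sensor1", "temperature", "high")), time_step=42)

# A temporal pattern is simply an ordered sequence of such events, which a
# reasoner can later generalize into a hypothesis about regularities.
pattern = (reading, Event(CompoundTerm("*", ("heater", "off")), time_step=43))
print(reading, "->", pattern[1].term)
```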
Chapter
Full-text available
A novel method of Goal-directed Procedure Learning is presented that overcomes some of the drawbacks of the traditional approaches to planning and reinforcement learning. The necessary principles for acquiring goal-dependent behaviors, and the motivations behind this approach are explained. A concrete implementation exists in a Non-Axiomatic Reasoning System, OpenNARS, although we believe the findings may be generally applicable to other AGI systems.
... Consciousness will be introduced initially as a cognitive function that can be realized in a computer system. I will then briefly summarize the design of our model, NARS (which has been specified in Wang [2006, 2013] and explained in our previous publications), with a focus on the distinction between conscious and unconscious processes. Finally, I will address the phenomenal aspect of consciousness, as well as the related theoretical issues. ...
... In this section, I introduce the functional aspects of consciousness in the AI system NARS (Non-Axiomatic Reasoning System), as implemented in the open-source software OpenNARS (at http://opennars.org/). Since NARS has been described in monographs Wang [2006, 2013] and a large number of other publications, in this paper, I only summarize its basic ideas and features, and refer to the existing publications for details. ...
... The applicable inference rules will be triggered and derive new tasks. For the details of the inference rules, see Wang [2013]. The results of inference also include commands for certain operations to be executed, either on the (external) environment via an I/O channel, or on the (internal) memory. ...
Article
This paper describes the consciousness-related aspects of the AGI system NARS, discusses the implications of this design and compares it with other relevant theories and designs. It is argued that the function of consciousness is self-awareness and self-control, and the phenomenal aspect of consciousness is the first-person perspective of the same process for which the functional aspect is the third-person perspective.
... This paper is not an attempt to produce such a definition. Instead it is meant to show an implementation of emotion within a specific cognitive architecture, NARS (Non-Axiomatic Reasoning System) [8,9]. ...
... This is captured by the acronym AIKR: Assumption of Insufficient Knowledge and Resources. AIKR and the NARS system are discussed in many publications, including two books [8,9]. This section will only cover the aspects of NARS most relevant to the current discussion. ...
... This section will only cover the aspects of NARS most relevant to the current discussion. NARS makes use of a formal language, Narsese, for its knowledge representation, and this language is defined using a formal grammar in [9]. The system's logic is developed from the traditional "term logic". ...
Chapter
Full-text available
Emotions play a crucial role in different cognitive functions, such as action selection and decision-making processes. This paper describes a new appraisal model for the emotion mechanism of NARS, an AGI system. Different from the previous appraisal model where emotions are triggered by the specific context, the new appraisal evaluates the relations between the system and its goals, based on a new set of criteria, including desirability, belief, and anticipation. Our work focuses on the functions of emotions and how emotional reactions could help NARS to improve its various cognitive capacities.
... In the following we introduce a preliminary design, as a first step in this direction. The following design is an addition to NARS (Non-Axiomatic Reasoning System), which is an AGI system that has been described in a large number of publications, including [25,27]. Limited by paper length, here we only describe the components of NARS that are directly related to perception. ...
... NARS uses the formal language Narsese for both internal representation and external communication, and its grammar is given in [27]. Narsese is a term-based language, in which each term is the identifier of a concept within the system. ...
... Terms in NARS can be obtained directly from the system's experience, or constructed by the system from the existing terms using composing/decomposing rules [27]. For the current discussion, sensory terms are produced by the sensors, while perceptual terms are constructed by the system from the existing sensory or perceptual terms. ...
Chapter
Full-text available
This paper argues that according to the relevant discoveries of cognitive science, in AGI systems perception should be subjective, active, and unified with other processes. This treatment of perception is fundamentally different from the mainstream approaches in computer vision and machine learning, where perception is taken to be objective, passive, and modular. The conceptual design of perception in the AGI system NARS is introduced, where the three features are realized altogether. Some preliminary testing cases are used to show the features of this novel approach.
... This idea resembles Simon's "bounded rationality" and some other ideas [Simon, 1957, Good, 1983, Cherniak, 1986, Anderson, 1990, Russell and Wefald, 1991, Gigerenzer and Selten, 2002]. What makes this new approach different is that it is instantiated by a formal logic designed to completely accept AIKR, and the logic has been mostly implemented in a computer system [Wang, 1995, Wang, 2013]. ...
... NAL (Non-Axiomatic Logic) is the logic part of NARS (Non-Axiomatic Reasoning System), an AGI project aimed at a thinking machine that is fully based on AIKR (Assumption of Insufficient Knowledge and Resources). Since the details of NAL have been described in many publications, especially [Wang, 2013], in this chapter it is not fully specified, but used as an example of a new type of logic. ...
... In the current design [Wang, 2013], NAL is introduced in 9 layers, NAL-1 to NAL-9. Each layer extends the grammar rules, semantics, and inference rules to increase the expressive and inferential power of the logic, while respecting AIKR. ...
Chapter
Logic should return its focus to valid reasoning in real-world situations. Since classical logic only covers valid reasoning in a highly idealized situation, there is a demand for a new logic for everyday reasoning that is based on more realistic assumptions, while still keeping the general, formal, and normative nature of logic. NAL (Non-Axiomatic Logic) is built for this purpose; it is based on the assumption that the reasoner has insufficient knowledge and resources with respect to the reasoning tasks to be carried out. In this situation, the notion of validity has to be re-established, and the grammar rules and inference rules of the logic need to be designed accordingly. Consequently, NAL has features very different from classical logic and other non-classical logics, and it provides a coherent solution to many problems in logic, artificial intelligence, and cognitive science.
... Distributed Non-Axiomatic Reasoning System (DNARS) is a novel architecture for reasoning which can be employed for intelligent agent development. It extends Non-Axiomatic Logic reasoning [9][10][11] by introducing the capability for distributed processing, which allows large amounts of data to be processed. The main advantage of DNARS, when compared to all other existing reasoning and cognitive architectures, is that it leverages state-of-the-art techniques for large-scale, distributed data management and processing. ...
... Instead of the popular Belief-Desire-Intention (BDI) model for the development of intelligent agents, DNARS is based on so-called non-axiomatic logic, a formalism developed in the domain of artificial general intelligence [11]. The term non-axiomatic means that the logic is suitable for the development of systems that operate under conditions of insufficient knowledge and resources. ...
... II. DNARS IMPLEMENTATION. Non-axiomatic logic is a formalism for the specification of reasoning systems within artificial general intelligence [9][10][11]. NAL includes a grammar (i.e., an alphabet), a set of execution (inference) rules, and a semantic theory. ...
Conference Paper
Full-text available
Development of Agent-based languages is the natural extension of the research in the area of Agent-based systems. This paper deals with adding support for Distributed Non-Axiomatic Reasoning to the ALAS agent-oriented language. This support has been added to the Siebog agent middleware. Siebog is a distributed multiagent system based on modern web and enterprise standards. Siebog has built-in support for reasoning based on the Distributed Non-Axiomatic Reasoning System (DNARS). DNARS is a reasoning system based on non-axiomatic logic (NAL) and general principles of development of non-axiomatic reasoning systems. So far, DNARS-enabled agents could be written only in the Java programming language. To solve the problem of interoperability within different Siebog platforms, an agent-oriented domain-specific language (AODSL), ALAS, has been developed. The main purpose of ALAS is to support agent mobility and the implementation and execution of agents on heterogeneous platforms. This paper describes the extended version of the ALAS language which supports DNARS. The conversion process of ALAS code to Java code is also described in this paper. The latest version of ALAS has been developed using the textX framework and the Arpeggio parser.
I. INTRODUCTION. This paper deals with improving the support for the Distributed Non-Axiomatic Reasoning System (DNARS [1]) by extending the ALAS language [2][3][4] to support DNARS. The ALAS language is the agent-oriented language built for the Siebog agent middleware [5][6][7]. The Siebog middleware supports both server-side [5][7] (application server-based) and client-side [8] (browser-based) agents. Distributed Non-Axiomatic Reasoning System (DNARS) is a novel architecture for reasoning which can be employed for intelligent agent development. It extends Non-Axiomatic Logic reasoning [9][10][11] by introducing the capability for distributed processing, which allows large amounts of data to be processed. The main advantage of DNARS, when compared to all other existing reasoning and cognitive architectures, is that it leverages state-of-the-art techniques for large-scale, distributed data management and processing. This approach allows DNARS to operate on top of very large knowledge bases, while serving large numbers of external clients with real-time responsiveness.
... The design of NARS has been described in two research monographs [19,20] and more than 60 papers, most of which can be downloaded at https://cis.temple.edu/~pwang/papers.html. In 2008, the project became open source, and since then has had more than 20 releases. ...
... The table includes three cases involving the same group of statements, where "robin → bird" expresses "Robin is a type of bird", "bird → [flyable]" expresses "Bird can fly", and "robin → [flyable]" expresses "Robin can fly". For the complete specification of the Narsese grammar, see [20]. ...
... Deduction in NARS is based on the transitivity of the inheritance relation, that is, "if A is a type of B, and B is a type of C, then A is a type of C." This rule looks straightforward, except that since the two premises are true to differing degrees, so is the conclusion. Therefore, a truth-value function is part of the rule, which uses the truth-values of the premises to calculate the truth-value of the conclusion [20]. ...
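As a concrete illustration of the point above, the following sketch applies the deduction truth-value function given in the book (frequency f = f1·f2, confidence c = f1·f2·c1·c2) to the robin/bird/flyable statements; the numeric truth values are made up for the example:

```python
# Illustrative sketch of the NAL deduction truth-value function: from premises
# "A -> B" <f1, c1> and "B -> C" <f2, c2>, conclude "A -> C" <f, c>.
def deduction(f1: float, c1: float, f2: float, c2: float) -> tuple[float, float]:
    f = f1 * f2              # conclusion frequency
    c = f1 * f2 * c1 * c2    # conclusion confidence never exceeds either premise's
    return f, c

# "robin -> bird" <1.0, 0.9> and "bird -> [flyable]" <0.9, 0.9>  (example values)
print(deduction(1.0, 0.9, 0.9, 0.9))   # -> (0.9, 0.729)
```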
Article
Full-text available
In the current discussions about "artificial intelligence" (AI) and "singularity", both labels are used with several very different senses, and the confusion among these senses is the root of many disagreements. Similarly, although "artificial general intelligence" (AGI) has become a widely used term in the related discussions, many people are not really familiar with this research, including its aim and status. We analyze these notions, and introduce the results of our own AGI research. Our main conclusions are that: (1) it is possible to build a computer system that follows the same laws of thought and shows similar properties as the human mind, but, since such an AGI will have neither a human body nor human experience, it will not behave exactly like a human, nor will it be "smarter than a human" on all tasks; and (2) since the development of an AGI requires a reasonably good understanding of the general mechanism of intelligence, the system's behaviors will still be understandable and predictable in principle. Therefore, the success of AGI will not necessarily lead to a singularity beyond which the future becomes completely incomprehensible and uncontrollable.
... NARS (non-axiomatic reasoning system) is an AGI designed in the framework of a reasoning system. The project has been described in many publications, including two books (Wang, 2006, 2013), so it is only briefly summarized here. ...
... As a reasoning system, NARS uses a formal language called "Narsese" for knowledge representation, which is defined by a formal grammar given in the study by Wang (2013). To fully specify and explain this language is beyond the scope of this article, so in the following, only the directly relevant part is introduced informally and described briefly. ...
... For example, the implication statement "E1 ⇒ E2" has three temporal versions, corresponding to the above three temporal orders, respectively. [Footnote 1: This treatment is similar to the set-theoretic definition of "relation" as a set of tuples, where it is also possible to define what is related to a given element in the relation as a set. For detailed discussions, see the studies by Wang (2006, 2013).] [Footnote 2: The definitions of disjunction and conjunction in propositional logic do not require the components to be related in content, which leads to various issues under AIKR. In NARS, such a compound is formed only when the components are related semantically, temporally, or spatially.] ...
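To illustrate the three temporal versions mentioned above, here is a minimal sketch using the OpenNARS-style copulas =/> (predictive), =|> (concurrent) and =\> (retrospective); the example statements are invented for illustration:

```python
# Illustrative sketch of the three temporal versions of implication,
# using OpenNARS-style Narsese copulas (a simplification).
from enum import Enum

class TemporalOrder(Enum):
    FORWARD = "=/>"      # predictive: E1 happens before E2
    CONCURRENT = "=|>"   # concurrent: E1 and E2 happen at the same time
    BACKWARD = "=\\>"    # retrospective: E1 happens after E2

def temporal_implication(e1: str, e2: str, order: TemporalOrder) -> str:
    return f"<{e1} {order.value} {e2}>"

print(temporal_implication("<door --> [open]>", "<light --> [on]>", TemporalOrder.FORWARD))
# <<door --> [open]> =/> <light --> [on]>>
```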
Article
Full-text available
This article describes and discusses the self-related mechanisms of a general-purpose intelligent system, NARS. This system is designed to be adaptive and to work with insufficient knowledge and resources. The system’s various cognitive functions are uniformly carried out by a central reasoning-learning process following a “non-axiomatic” logic. This logic captures the regularities of human empirical reasoning, where all beliefs are revisable according to evidence, and the meaning of concepts is grounded in the system’s experience. NARS perceives its internal environment basically in the same way as it perceives its external environment, although the sensors involved are completely different. Consequently, its self-knowledge is mostly acquired and constructive, while being incomplete and subjective. Similarly, self-control in NARS is realized using mental operations, which supplement and adjust the automatic inference control routine. It is argued that a general-purpose intelligent system needs the notion of a “self,” and the related knowledge and functions are developed gradually according to the system’s experience. Such a mechanism has been implemented in NARS in a preliminary form.
... Learning is necessary for goal achievement in a changing, novel environment. All learning machines, whether natural or artificial, are limited by the time and energy they have available; the outermost constraint on any learning mechanism is the assumption of insufficient knowledge and resources (AIKR) [27]. However, there is a large number of ways to interpret these constraints when implementing learning mechanisms, and thus there are numerous dimensions along which any learning ability may vary. ...
... For an implemented system, neither memory nor computation speed is infinite [27]. This means all learners must make choices on what knowledge can and should be retained (cf. ...
... In addition to selective forgetting, AERA's rewriting rules reduce redundancies and storage requirements through increased generality, whereby values are replaced with variables coupled with ranges [15]. In NARS, forgetting has two related senses: (1) relative forgetting: decrease priority to save time; (2) absolute forgetting: remove from memory to save space and time [27]. ...
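A minimal sketch of the two senses of forgetting described above (relative: priority decay; absolute: eviction at capacity). The class name, decay factor, and eviction policy details are illustrative assumptions, not OpenNARS code:

```python
# Illustrative sketch of relative and absolute forgetting in a fixed-capacity memory.
class ForgettingMemory:
    def __init__(self, capacity: int, decay: float = 0.9):
        self.capacity = capacity
        self.decay = decay          # per-cycle priority multiplier (< 1)
        self.items = {}             # key -> priority

    def step(self) -> None:
        # Relative forgetting: every item's priority decays, saving time by
        # making rarely-useful items less likely to be selected.
        for key in self.items:
            self.items[key] *= self.decay

    def add(self, key: str, priority: float) -> None:
        # Absolute forgetting: if memory is full, remove the lowest-priority
        # item entirely to save space (and the time spent maintaining it).
        if len(self.items) >= self.capacity and key not in self.items:
            weakest = min(self.items, key=self.items.get)
            del self.items[weakest]
        self.items[key] = priority

memory = ForgettingMemory(capacity=2)
memory.add("<robin --> bird>", 0.8)
memory.add("<bird --> [flyable]>", 0.5)
memory.step()
memory.add("<robin --> [flyable]>", 0.9)   # evicts the weakest belief
print(memory.items)
```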
Chapter
Full-text available
An important feature of human learning is the ability to continuously accept new information and unify it with existing knowledge, a process that proceeds largely automatically and without catastrophic side-effects. A generally intelligent machine (AGI) should be able to learn a wide range of tasks in a variety of environments. Knowledge acquisition in partially-known and dynamic task-environments cannot happen all-at-once, and AGI-aspiring systems must thus be capable of cumulative learning: efficiently making use of existing knowledge while learning new things, increasing the scope of ability and knowledge incrementally—without catastrophic forgetting or damaging existing skills. Many aspects of such learning have been addressed in artificial intelligence (AI) research, but relatively few examples of cumulative learning have been demonstrated to date and no generally accepted explicit definition exists of this category of learning. Here we provide a general definition of cumulative learning and describe how it relates to other concepts frequently used in the AI literature.
... The above working definition has many implications. Here I briefly introduce the guidance it provides in the design of NARS (Non-Axiomatic Reasoning System) (Wang, 1995, 2006b, 2013). ...
... NAL also shares common features with set theory, propositional logic, predicate logic, non-monotonic logic, and fuzzy logic, but still differs from them fundamentally, as none of them is designed for adaptation under AIKR. For comparisons between NAL and those formal models, see Wang (2006b, 2013). ...
... The approach taken by NAL is inspired by logic programming (Kowalski, 1979), where goals and actions are expressed by statements with special interpretations. Consequently, various cognitive functions, including learning, planning, searching, categorizing, observing, acting, communicating, etc., become different aspects of the same underlying process in NARS (Wang, 2006b, 2013). These processes are all formulated according to the principle of adaptation under AIKR, and only try to produce the optimal solution with respect to the currently available knowledge and resources, and so are quite different from how each of them is specified and carried out in other AI techniques. ...
Article
Full-text available
This article systematically analyzes the problem of defining “artificial intelligence.” It starts by pointing out that a definition influences the path of the research, then establishes four criteria of a good working definition of a notion: being similar to its common usage, drawing a sharp boundary, leading to fruitful research, and being as simple as possible. According to these criteria, the representative definitions in the field are analyzed. A new definition is proposed, according to which intelligence means “adaptation with insufficient knowledge and resources.” The implications of this definition are discussed, and it is compared with the other definitions. It is claimed that this definition sheds light on the solution of many existing problems and sets a sound foundation for the field.
... There has been some recent progress in this area [10], but this does not yet match the results of back propagation and ANNs in general [10]. This paper presents an approach to combine the benefits of SNNs with a Non-Axiomatic Logic (NAL) [8] and a Non-Axiomatic Reasoning System philosophy (NARS). We will show that a symbolic logic (NAL) can assume the role of 'training' a SNN via an inference mechanism and form the framework of an attentional control mechanism for a reasoning system whilst operating under the Assumption of Insufficient Knowledge and Resources (AIKR). ...
... ALANN is a concrete implementation of a NARS and adheres to the underlying requirements of: a unified principle of cognition which adapts to its environment; Finite: works with insufficient knowledge and resources (fixed); Real-time: responds in real time to new information; and Open: to input from any domain (expressible in the input language). NAL is a syllogistic logic and generally requires two premises to have a common term in order to apply an inference rule (there are exceptions to this such as temporal or structural inference) [8]. In principle a premise in NAL is a <term copula term> where term is a symbol or a premise and copula is a link, with a truth value, between terms. ...
... In principle a premise in NAL is a <term copula term> where term is a symbol or a premise and copula is a link, with a truth value, between terms. For the purpose of this discussion we will limit the range of NAL copulas and operators, but in NAL the number of copulas and operators is much larger [8]. In this paper we will assume a term is a symbol composed of letters and digits and a copula is an inheritance relation → [8]. Truth values in NAL are evidence based (w+, w−), where w+ represents positive evidence and w− represents negative evidence, or alternatively are given as a frequency f and confidence c tuple, where f = w+ / (w+ + w−) and c = (w+ + w−) / (w+ + w− + k), where k is a global personality parameter that indicates a global evidential horizon [8]. ...
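The evidence-to-truth-value mapping quoted above can be written out directly; the following sketch uses the standard NAL definitions with k as the evidential-horizon parameter (the zero-evidence default of 0.5 is a convention of this sketch, not part of the logic):

```python
# Illustrative sketch: frequency and confidence computed from positive and
# negative evidence, with k the "personality parameter" (evidential horizon).
def truth_from_evidence(w_plus: float, w_minus: float, k: float = 1.0) -> tuple[float, float]:
    w = w_plus + w_minus
    frequency = w_plus / w if w > 0 else 0.5   # proportion of positive evidence
    confidence = w / (w + k)                   # approaches 1 as evidence accumulates
    return frequency, confidence

print(truth_from_evidence(4, 1))   # (0.8, 0.833...) with k = 1
```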
Conference Paper
Full-text available
Adaptive Logic and Neural network (ALANN): A neuro-symbolic approach to, event driven, attentional control of a NARS system. A spiking neural network (SNN) model is used as the control mechanism in conjunction with the Non-Axiomatic Logic (NAL). An inference engine is used to create and adjust links and associated link strengths and provide activation spreading under contextual control.
... Some of the AgentSpeak concepts, such as the difference between private and public services, influenced ALAS development, although it has never been put into practical use. Development of the Jason (2017) interpreter has contributed to the popularity of AgentSpeak (Bordini et al. 2009; Wang 2013). ...
... Inspired by the languages described in this section and using the concepts on which they are based, ALAS has been developed. However, the basic difference between ALAS and AOPLs based on the BDI architecture is that instead of BDI intelligent agents, ALAS allows the development of intelligent agents based on the NAL formalism developed in the Artificial General Intelligence domain (Wang 2013; Wang 2006; Wang and Awan 2011). ...
... A truth-value represents the degree of consistency of a belief, and is defined by two values, frequency (f) and confidence (c) (Wang 2013). ...
Article
Full-text available
This paper presents an extension of the agent-oriented domain-specific language ALAS to support Distributed Non-Axiomatic Reasoning. ALAS is intended for the development of a specific kind of intelligent agents. It is designed to support the Siebog Multi-Agent System (MAS) and the implementation of Siebog intelligent agents. Siebog is a distributed MAS based on modern web and enterprise standards. Siebog offers support for reasoning based on the Distributed Non-Axiomatic Reasoning System (DNARS). DNARS is a reasoning system based on Non-Axiomatic Logic (NAL). So far, DNARS-enabled agents could be written only in the Java programming language. To solve the problem of interoperability and agent mobility within Siebog platforms, the ALAS language has been developed. The goal of such a language is to allow programmers to develop intelligent agents more easily by using domain-specific constructs. The conversion process of ALAS code to Java code is also described in this paper.
... ONA is a NARS as described by Non-Axiomatic Reasoning System theory [18]. For a system to be classified as an instance of a NARS it needs to work under the Assumption of Insufficient Knowledge and Resources (AIKR). ...
... What all Non-Axiomatic Reasoning Systems have in common is the use of the Non-Axiomatic Logic (NAL) [18], a term logic with evidence-based truth values, which allows the systems to deal with uncertainty. Due to the compositional nature of NAL, these systems usually have a concept-centric memory structure, which exploits subterm relationships for control purposes. ...
... Semantic Inference: All declarative reasoning using NAL layers 1-6 occurs here as described in [18], meaning no temporal and procedural aspects are processed here. As inheritance can be seen as a way to describe objects in a universe of discourse [17], the related inference helps the reasoner to categorize events, and to refine these categorizations with further experience. ...
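A rough sketch of the concept-centric memory idea mentioned above: beliefs are indexed under the concepts named by their terms, so that premises sharing a common term can be found quickly when an inference rule needs them. The class and method names are hypothetical:

```python
# Illustrative sketch of a concept-centric memory exploiting subterm relationships.
from collections import defaultdict

class ConceptMemory:
    def __init__(self):
        self.concepts = defaultdict(list)   # term -> beliefs mentioning it

    def add_belief(self, subject: str, predicate: str, truth: tuple[float, float]) -> None:
        belief = (subject, predicate, truth)
        for term in (subject, predicate):
            self.concepts[term].append(belief)

    def beliefs_about(self, term: str):
        return self.concepts[term]

memory = ConceptMemory()
memory.add_belief("robin", "bird", (1.0, 0.9))
memory.add_belief("bird", "[flyable]", (0.9, 0.9))
# Both beliefs share the term "bird", so they can be paired for deduction:
print(memory.beliefs_about("bird"))
```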
Chapter
Full-text available
A pragmatic design for a general purpose reasoner incorporating the Non-Axiomatic Logic (NAL) and Non-Axiomatic Reasoning System (NARS) theory. The architecture and attentional control differ in many respects from the OpenNARS implementation. Key changes include: an event-driven control process, separation of sensorimotor from semantic inference, and a different handling of resource constraints.
... The idea we present in this paper is to combine the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) [5], [6] to develop multi-agent systems that are able to reason, deliberate and plan when information about plans to be executed and goals to be pursued is missing or incomplete. ...
... Preconditions are associated with each plan and are used to select the best plan, according to the agent's perception. NARS uses Non-Axiomatic Logic (NAL) inference rules, which are well defined in [6] and allow an artificial agent to plan even under the assumption of insufficient knowledge and resources. By combining these approaches, it is possible to increase the performance of the overall system in terms of finding a new strategy that combines these two aspects. ...
... NARS is an Artificial General Intelligence (AGI) [6] system developed in the context of an inferential system. It uses Non-Axiomatic Logic, a term logic that extends Aristotelian logic and its syllogistic forms to include compositions of terms as well as a notion of indeterminacy. ...
Preprint
This work explores the possibility of combining the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) to develop multi-agent systems that are able to reason, deliberate and plan when information about plans to be executed and goals to be pursued is missing or incomplete. The contribution of this work is a method for BDI agents to create high-level plans using an AGI (Artificial General Intelligence) system based on non-axiomatic logic.
... The multi-class and multi-object tracker was developed by Cisco Systems Inc., and the reasoning-learning component utilized is the OpenNARS implementation [3] which implements a Non-Axiomatic Reasoning System [10]. We tested our approach on a publicly available dataset, 'Street Scene', which is a typical scene in the Smart City domain. ...
... As described in [9], this means the system works in Real-Time, is always open to new input, and operates with a constant information processing ability and storage space. An important part is the Non-Axiomatic Logic (see [10] and [11]) which allows the system to deal with uncertainty. To our knowledge our solution is the first to apply NARS for a real-time visual reasoning task. ...
... Knowledge Representation. As a reasoning system, NARS uses a formal language called "Narsese" for knowledge representation, which is defined by a formal grammar given in [10]. To fully specify and explain this language is beyond the scope of this article, so in the following only the directly relevant part is introduced informally and described briefly. ...
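As an illustration of how tracker output might be fed to such a reasoner as Narsese, here is a sketch using OpenNARS-style event syntax; the statements, the `locatedIn` relation, and the object/region identifiers are hypothetical and not taken from the paper:

```python
# Illustrative sketch (hypothetical statements): turning tracker detections into
# Narsese event judgments, where ":|:" marks an event happening now, "{...}" an
# instance, and "[...]" a property.
def detection_to_narsese(object_id: str, object_class: str, region: str) -> list[str]:
    return [
        f"<{{{object_id}}} --> {object_class}>. :|:",          # the object's class
        f"<({{{object_id}}} * {region}) --> locatedIn>. :|:",  # where it currently is
    ]

for statement in detection_to_narsese("car3", "car", "region7"):
    print(statement)
# <{car3} --> car>. :|:
# <({car3} * region7) --> locatedIn>. :|:
```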
Conference Paper
Full-text available
Using a visual scene object tracker and a Non-Axiomatic Reasoning System we demonstrate how to predict and detect various anomaly classes. The approach combines an object tracker with a base ontology and the OpenNARS reasoner to learn to classify scene regions based on accumulating evidence from typical entity class (tracked object) behaviours. The system can autonomously satisfy goals related to anomaly detection and respond to user Q&A in real time. The system learns directly from experience with no initial training required (one-shot). The solution is a fusion of neural techniques (object tracker) and a reasoning system (with ontology).
... NARS (Non-Axiomatic Reasoning System) is an AGI system designed in the framework of a reasoning (or inference) system, where the notion of "reasoning" is used in its broad sense to include many cognitive functions, including learning and perception (Wang, 1995, 2006, 2013). ...
... Each concept in NARS is named by a term, which is an identifier used within the system to address and manipulate concepts. NARS uses a term-oriented knowledge representation language, Narsese, which is defined in a formal grammar (Wang, 2013). The language is "formal" in the sense that a symbol in a grammar rule can be instantiated by different terms to name different concepts. ...
... The above experience-grounded semantics (Wang, 2005) not only provides justifications to the logic of NARS, Non-Axiomatic Logic (NAL) (Wang, 2013), but also makes it possible to extend the applicable domain of this logic from abstract concepts to include concepts with sensorimotor associations, as both sensorimotor experience and linguistic experience are involved when determining the meaning of a concept or its identifier, a term. ...
Article
Full-text available
This article discusses an approach to add perception functionality to a general-purpose intelligent system, NARS. Differently from other AI approaches toward perception, our design is based on the following major opinions: (1) Perception primarily depends on the perceiver, and subjective experience is only partially and gradually transformed into objective (intersubjective) descriptions of the environment; (2) Perception is basically a process initiated by the perceiver itself to achieve its goals, and passive receiving of signals only plays a supplementary role; (3) Perception is fundamentally unified with cognition, and the difference between them is mostly quantitative, not qualitative. The directly relevant aspects of NARS are described to show the implications of these opinions in system design, and they are compared with the other approaches. Based on the research results of cognitive science, it is argued that the Narsian approach better fits the need of perception in Artificial General Intelligence (AGI).
... This chapter does not include a detailed introduction to the logic and control mechanism, but only contains part of the content, aiming to help readers understand the rest of this paper. For a detailed description of NAL and NARS, please see Wang (2006), Wang (2013), and the related studies on GitHub. ...
... NARS (Non-Axiomatic Reasoning System) is an AGI designed in the framework of a reasoning system. The project has been described in many publications, including two books (Wang, 2006; Wang, 2013). Therefore, this chapter only introduces the content that is directly related to the doctoral dissertation. ...
... As a reasoning system, NARS uses a formal language called "Narsese" for knowledge representation, which is defined by a formal grammar given in Wang (2013). ...
Thesis
Full-text available
Functionalist Emotion Model in Artificial General Intelligence by Xiang Li. The objective of this research is to elucidate motivation and emotion processing in an AGI (Artificial General Intelligence) system, NARS (Non-Axiomatic Reasoning System). Under the basic assumption that an artificial general intelligence system should work with insufficient resources and knowledge, the emotion module can help direct the selection of internal tasks, and allow the autonomous allocation of internal resources and rapid response with urgency, so that the inference capability of the AGI system can be improved. The psychological and AI theories related to emotion are extensively reviewed, including the source of emotion, the appraisal process in emotional experience, the cognitive processing and coping process, and the necessity of emotion for Artificial General Intelligence design. This dissertation describes the conceptual design, realization process and application process of emotion in NARS. The process of internal resource allocation triggered by different emotions based on the NARS reasoning framework is proposed, and the design can be applied to any scene. The similarities and differences between human emotion and artificial intelligence emotion are discussed. At the same time, the advantages and disadvantages of the design and its theory are also discussed. A recent implementation of the NARS model will be discussed with examples, and the emotion model has been tested preliminarily in a new version of OpenNARS. The new Temporal Induction model, Anticipation model, Goal processing model, and Emotion models which are implemented in the new system will also be discussed in detail. The dissertation concludes with suggestions and ideas that are put forward for the role of emotion in future human-computer interaction.
... However, its rigid plan-finding procedure and predefined plan library make it unsuitable for the above purposes. The idea we present in this paper is to combine the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) [5,6] to develop multi-agent systems that are able to reason, deliberate and plan when information about plans to be executed and goals to be pursued is missing or incomplete. ...
... Preconditions are associated with each plan and are used to select the best plan, according to the agent's perception. NARS uses Non-Axiomatic Logic (NAL) inference rules, which are well defined in [6] and allow an artificial agent to plan even under the assumption of insufficient knowledge and resources. By combining these approaches, it is possible to increase the performance of the overall system in terms of finding a new strategy that combines these two aspects. ...
... NARS is an Artificial General Intelligence (AGI) [6] system developed in the context of an inferential system. It uses Non-Axiomatic Logic, a term logic that extends Aristotelian logic and its syllogistic forms to include compositions of terms as well as a notion of indeterminacy. ...
Preprint
This work explores the possibility of combining the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) to develop multi-agent systems that are able to reason, deliberate and plan when information about plans to be executed and goals to be pursued is missing or incomplete. The contribution of this work is a method for BDI agents to create high-level plans using an AGI (Artificial General Intelligence) system based on non-axiomatic logic.
... A stream of statements in words, symbols, and digits can never do justice to our rich and many-sided experience of a feeling, emotion, or an idea. Wang (2013) also mentions this in his detailed work on NARS. We can compare this to how our own articulation or verbalization of our experience or even pinning it down to some written form is almost always an inadequate, insufficient and unsatisfactory representation of that experience. ...
... It is pertinent to mention here an interesting admission made by the originator and developer of NARS about emotions and feelings in the system. According to Wang (2013), "It will be more natural to say that the system has different feelings for "objects and things", though accurately speaking, the feelings are about the concepts representing the "objects and things"" (p. 191). Thus, whether it is the internal 'mental operations' which influence the AI system's self-concept or the actual operation of the 'self' concept in an AI architecture, this artificial 'self' is a completely different organism from the 'human self'. ...
... NARS is designed according to the opinion that "intelligence" is the ability for a system to adapt to its environment and to work with insufficient knowledge and resources [13,16]. The reasoning system framework is chosen for its generality and normativeness, as well as flexibility and extendability to cover various cognitive functions, such as planning, learning, categorizing, etc. ...
... Since such knowledge is typically about individual statements or operations, not about the system as a whole, it is not discussed here. For how this kind of knowledge is processed in NARS, see [13,16]. NARS constantly compares the certainty of beliefs, and dynamically allocates its resources among competing processes. ...
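To make the "comparing the certainty of beliefs" point above concrete, the following sketch shows the standard NAL way of handling two beliefs about the same statement: revision pools evidence from distinct sources, while the choice rule keeps the more confident belief. The helper names are illustrative:

```python
# Illustrative sketch of belief revision and choice over (frequency, confidence)
# truth values (assumes confidence < 1, as in NAL).
def revise(f1, c1, f2, c2, k=1.0):
    # Convert (frequency, confidence) back to evidence amounts, pool them,
    # and convert the pooled evidence back to a truth value.
    w1, w2 = k * c1 / (1 - c1), k * c2 / (1 - c2)
    w1_plus, w2_plus = f1 * w1, f2 * w2
    w, w_plus = w1 + w2, w1_plus + w2_plus
    return w_plus / w, w / (w + k)

def choose(truth1, truth2):
    # Choice: prefer the belief supported by more evidence (higher confidence).
    return truth1 if truth1[1] >= truth2[1] else truth2

print(revise(1.0, 0.5, 0.0, 0.5))   # conflicting evidence -> (0.5, ~0.667)
print(choose((1.0, 0.5), (0.8, 0.9)))
```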
Conference Paper
Full-text available
This paper describes the self-awareness and self-control mechanisms of a general-purpose intelligent system, NARS. The system perceives its internal environment basically in the same way as how it perceives its external environment, though the sensors involved are completely different. NARS uses a “self” concept to organize its relevant beliefs, tasks, and operations. The concept has an innate core, though its content and structure are mostly acquired gradually from the system’s experience. The “self” concept and its ingredients play important roles in the control of the system.
... The idea we present is to merge the Jason reasoning cycle with a Non-Axiomatic Reasoning System (NARS) [8,12]. Both the Jason framework and NARS can be utilized to handle planning. ...
... After the execution of the intention, NARS is informed about the success or the failure of the added plan; it takes note of the result of the action to enhance the success estimate of the plan-related hypothesis. If the new plan fails and it has already been converted into an intention, NARS is informed about that and the system checks the truth value (expectation, following [12]) to decide whether the plan should be retracted. [Footnote 1: NARS uses a fixed-sized memory, so forgetting is inevitable once its memory is at full capacity.] ...
... NARS is designed to be adaptive and to work with insufficient knowledge and resources. Its various cognitive functions are uniformly carried out by a central reasoning-learning process following a "non-axiomatic" logic (Wang, 2013). ...
... NARS makes use of a formal language, "Narsese", for its knowledge representation, and this language is defined using a formal grammar (Wang, 2013). The system's logic is developed from a so-called term logic. ...
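One of the excerpts above mentions checking the truth value (expectation) to decide whether a plan should be retracted. A minimal sketch, using the NAL expectation formula e = c·(f − 0.5) + 0.5 and a purely illustrative retraction threshold:

```python
# Illustrative sketch (the retraction threshold is an assumption of this sketch):
# expectation combines frequency and confidence into a single success estimate.
def expectation(frequency: float, confidence: float) -> float:
    return confidence * (frequency - 0.5) + 0.5

def should_retract(frequency: float, confidence: float, threshold: float = 0.5) -> bool:
    # Retract the plan-related hypothesis when its expected success drops
    # below the (illustrative) threshold.
    return expectation(frequency, confidence) < threshold

# A plan that failed 3 of 4 times (f = 0.25) with moderate confidence:
print(expectation(0.25, 0.8), should_retract(0.25, 0.8))   # 0.3 True
```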
Book
Full-text available
Welcome to SweCog 2018 in Linköping! This booklet contains the program and short papers for oral and poster presentations at SweCog 2018, this year’s edition of the annual conference of the Swedish Cognitive Science Society. Following the SweCog tradition and its aim to support networking among researchers in cognitive science and related areas, contributions cover a wide spectrum of research. A trend in recent years, also reflected in this year’s conference program, is an increasing number of contributions that deal with different types of autonomous technologies, such as social robots, virtual agents or automated vehicles, and in particular people’s interaction with such systems. This clearly is a growing research area of high societal relevance, where cognitive science - with its interdisciplinary and human-centered approach - can make significant contributions. We look forward to two exciting days in Linköping, and we thank the many people who have contributed to the organization of this year’s SweCog conference, in particular of course all authors and reviewers! The organization of SweCog 2018 has been supported by the Faculty of Arts and Sciences, the Department of Culture and Communication (IKK), and the Department of Computer and Information Science (IDA) at Linköping University, as well as Cambio Healthcare Systems and Visual Sweden.
... We consider the conjecture that knowledge bootstrapping at birth is a special case of the general principles involved in bootstrapping learning in partially unknown, novel, circumstances. In both cases a learner starts with something given that is insufficient for addressing the novelty, facing the cognitive task of making use of what it already has to make sense of the new (Wang 2013). To address this subject we must look at three interlocked co-dependent realms: (a) The world of the learning agent and its target task-environments; (b) the mechanisms for control and management of the cumulative learning process; and (c) how learning is bootstrapped through an existing program - a knowledge seed. ...
... Nivel et al. 2014c, Hammer and Lofthouse 2020, Wang 2007), but especially with respect to realizing all of them in a unified, parsimonious way in a single real-time learner. A closely related system that rests on a highly compatible theoretical basis is Wang's non-axiomatic reasoning system (NARS; Wang 2006, Wang 2013). This, as well as relevant work of others, will be referenced throughout the paper where relevant. ...
Conference Paper
Full-text available
The knowledge that a natural learner creates based on its experience of any new situation is likely to be both partial and incorrect. To improve such knowledge with increased experience, cognitive processes must bring already-acquired knowledge towards making sense of new situations and update it with new evidence, cumulatively. For the initial creation of knowledge, and its subsequent usage, expansion, modification, unification, compaction and deletion, cognitive mechanisms must be capable of self-supervised "surgical" operation on existing knowledge, involving among other things self-inspection or reflection, to make possible selective discrimination, comparison, and manipulation of newly demarcated subsets of any relevant part of the whole knowledge set. Few proposals exist for how to achieve this in a single learner. Here we present a theory of how systems with these properties may work, and how cumulative self-supervised learning mechanisms might reach greater levels of autonomy than seen to date. Our theory rests on the hypotheses that learning must be (a) organized around causal relations, (b) bootstrapped from observed correlations and analogy, using (c) fine-grain relational models, manipulated by (d) micro-ampliative reasoning processes. We further hypothesize that a machine properly constructed in this way will be capable of seed-programmed autonomous generality: The ability to apply learning to any phenomenon - that is, being domain-independent - provided that the seed references observable variables from the outset (at "birth"), and that new phenomena and existing knowledge overlap on one or more observables or inferred features. The theory is based on implemented systems that have produced notable results in the direction of increased general machine intelligence.
... NARS, standing for Non-Axiomatic Reasoning System, is a general-purpose AI project [17,20]. Many of the topics addressed in the following have been described in previous publications [16,18,19,5], though this paper is the first in which the issue of real-time processing in NARS is comprehensively discussed. ...
... This strategy can only be used with a logic that is fully compatible with AIKR. The Non-Axiomatic Logic (NAL) used in NARS [20] satisfies this requirement. All inference rules of NAL are "local" in the sense that the conclusion is only generated and justified with a small number of (usually one or two) premises. ...
Chapter
This paper compares the various conceptions of “real-time” in the context of AI, as different ways of taking the processing time into consideration when problems are solved. An architecture of real-time reasoning and learning is introduced, which is one aspect of the AGI system NARS. The basic idea is to form problem-solving processes flexibly and dynamically at run time by using inference rules as building blocks and incrementally self-organizing the system’s beliefs and skills, under the restriction of time requirements of the tasks. NARS is designed under the Assumption of Insufficient Knowledge and Resources, which leads to an inherent ability to deal with varying situations in a timely manner.
... This can be considered a form of adaptation and is usefully paired with executable operations that NARS can perform to autonomously gain a richer understanding of its environment and pursue goals [9]. Similarly to the related works, NARS' goals drive the operations it performs. The system engages in common goal generation behaviors like subgoaling but is also unique in that the system explores many potential goal solution paths in parallel and optimizes these solution paths by reasoning on its existing beliefs and new incoming experience. ...
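A rough sketch of the subgoaling idea described above: given a goal and predictive beliefs of the form <condition =/> goal>, candidate subgoals are derived and ordered for pursuit. The priority heuristic (parent priority times belief expectation) is an assumption of this sketch, not the system's actual budget function:

```python
# Illustrative sketch of subgoal derivation and priority-ordered goal pursuit.
import heapq

def expectation(f: float, c: float) -> float:
    return c * (f - 0.5) + 0.5

def derive_subgoals(goal: str, goal_priority: float, beliefs: dict[str, tuple[float, float]]):
    # beliefs maps condition -> truth (f, c) of the belief "<condition =/> goal>"
    subgoals = []
    for condition, (f, c) in beliefs.items():
        priority = goal_priority * expectation(f, c)   # assumed heuristic
        heapq.heappush(subgoals, (-priority, condition))
    return subgoals

beliefs = {"<door --> [unlocked]>": (0.9, 0.9), "<window --> [open]>": (0.6, 0.5)}
queue = derive_subgoals("<self --> [inside]>", 1.0, beliefs)
print(heapq.heappop(queue))   # highest-priority subgoal is pursued first
```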
Conference Paper
AGI systems should be able to pursue their many goals autonomously while operating in realistic environments which are complex, dynamic, and often novel. This paper discusses the theory and mechanisms for goal generation and management in Non-Axiomatic Reasoning System (NARS). NARS works to accomplish its goals by performing executable actions while integrating feedback from its experience to build subjective, but useful, predictive and meaningful models. The system’s ever-changing knowledge allows it to adaptively derive new goals from its existing goals. Derived goals not only serve to accomplish their parent goals but also represent independent motivation. The system determines how and when to pursue its many goals based on priority, context, and knowledge acquired from its experience and reasoning capabilities.
... Latapie et al. (2021) proposed such a model inspired by Korzybski's (1994) idea about levels of abstraction. Their model promotes cognitive synergy and metalearning, which refer to the use of different computational techniques and AGI approaches, e.g., probabilistic programming, machine learning/Deep Learning, AERA (Thórisson, 2020), and NARS (Wang, 2006, 2013), to enrich its knowledge and address combinatorial explosion issues. The current article extends the metamodel as a neurosymbolic architecture, as in Figure 1. ...
Article
Full-text available
A cognitive architecture aimed at cumulative learning must provide the necessary information and control structures to allow agents to learn incrementally and autonomously from their experience. This involves managing an agent's goals as well as continuously relating sensory information to these in its perception-cognition information processing stack. The more varied the environment of a learning agent is, the more general and flexible must be these mechanisms to handle a wider variety of relevant patterns, tasks, and goal structures. While many researchers agree that information at different levels of abstraction likely differs in its makeup and structure and processing mechanisms, agreement on the particulars of such differences is not generally shared in the research community. A dual processing architecture (often referred to as System-1 and System-2) has been proposed as a model of cognitive processing, and they are often considered as responsible for low- and high-level information, respectively. We posit that cognition is not binary in this way and that knowledge at any level of abstraction involves what we refer to as neurosymbolic information, meaning that data at both high and low levels must contain both symbolic and subsymbolic information. Further, we argue that the main differentiating factor between the processing of high and low levels of data abstraction can be largely attributed to the nature of the involved attention mechanisms. We describe the key arguments behind this view and review relevant evidence from the literature.
... Therefore, it seems likely that AGI will need to integrate many different learning components. Domingos [27] suggests one integration approach and there are other approaches (e.g., NARS [53,54]). There are several issues here: ecosystem conventions, content inference, selection tradeoffs and component patterns. ...
Article
Full-text available
Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.
... The Non-Axiomatic Reasoning System (NARS) aims to be a general-purpose intelligent system that learns from experience and adapts to unknown environments [Wang, 2013]. It is built from the ground up around the Assumption of Insufficient Knowledge & Resources. ...
Conference Paper
Full-text available
While many evaluation procedures have been proposed in past research for artificial general intelligence (AGI), few take the time to carefully list the (minimum, general) requirements that an AGI-aspiring (cognitive) control architecture is intended to eventually meet. Such requirements could guide the design process and help evaluate the potential of an architecture to become generally intelligent - not through measuring the performance of a running AI system, but through a white-box, offline evaluation of what requirements have been met to what degree. Rather than providing our estimate of what features are necessary to achieve AGI, we analyze a concrete task from the air traffic control (ATC) domain to come up with a crisp set of requirements that AGI would need to meet as well. To avoid major disruptions to proven workflows in safety-critical domains, a trustworthy, robust and adaptable AI system must work side-by-side with a human operator and cumulatively learn new tasks that can gradually be introduced into the operator's complex workflow. Our analysis results in a set of minimal/necessary requirements that can guide the development of AGI-aspiring architectures. We conclude the paper with an evaluation of the degree to which several common AI approaches and architectures meet these requirements.
... Modeled causal relations can be manipulated through the application of ampliative reasoning (cf. Wang 2012) due to the hypothetico-deductive nature embedded in macro-scale causality. Without causality, in fact, not much can be done - committing to a behavior with the aim of achieving a certain outcome for thing Y by manipulating X is not successful when the two are only correlated but not causally related. ...
Conference Paper
Full-text available
The concept of “common sense” (“commonsense”) has had a visible role in the history of artificial intelligence (AI), primarily in the context of reasoning and what’s been referred to as “symbolic knowledge representation.” Much of the research on this topic has claimed to target general knowledge of the kind needed to ‘understand’ the world, stories, complex tasks, and so on. The same cannot be said about the concept of “understanding”; although the term does make an appearance in the discourse in various sub-fields (primarily “language understanding” and “image/scene understanding”), no major schools of thought, theories or undertakings can be discerned for understanding in the same way as for common sense. It’s no surprise, therefore, that the relation between these two concepts is an unclear one. In this review paper we discuss their relationship and examine some of the literature on the topic, as well as the systems built to explore them. We agree with the majority of the authors addressing common sense on its importance for artificial general intelligence. However, we claim that while in principle the phenomena of understanding and common sense manifested in natural intelligence may possibly share a common mechanism, a large majority of efforts to implement common sense in machines has taken an orthogonal approach to understanding proper, with different aims, goals and outcomes from what could be said to be required for an ‘understanding machine.’
... Other kinds of reasoning, however, are also necessary (abduction, deduction, and induction), which means we are really talking about ampliative reasoning [17]. The more domain-independent the cumulative learning is, the more effective and efficient knowledge accumulation can become, and this is where ampliative reasoning enters the picture: Using (a) deduction for prediction, based on learned (hypothesized) principles, (b) abduction for deriving plausible causes, and (c) analogies for adapting acquired knowledge to new situations, multiple lines of reasoning can help the learner exclude certain things while highlighting others, more quickly getting to the crux of how to achieve any task in light of prior experience. ...
Chapter
Full-text available
Autonomous knowledge transfer from a known task to a new one requires discovering task similarities and knowledge generalization without the help of a designer or teacher. How transfer mechanisms in such learning may work is still an open question. Transfer of knowledge makes most sense for learners for whom novelty is regular (other things being equal), as in the physical world. When new information must be unified with existing knowledge over time, a cumulative learning mechanism is required, increasing the breadth, depth, and accuracy of an agent’s knowledge as experience accumulates. Here we address the requirements for what we refer to as autonomous cumulative transfer learning (ACTL) in novel task-environments, including implementation and evaluation criteria, and how it relies on the processes of similarity and ampliative reasoning. While the analysis here is theoretical, the fundamental principles of the cumulative learning mechanism in our theory have been implemented and evaluated in a running system described previously. We present arguments for the theory from an empirical as well as analytical viewpoint.
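The excerpt above appeals to deduction, abduction, and analogy as the ampliative components of cumulative learning. As a rough orientation only, the sketch below implements two-premise truth-value functions for deduction, induction, and abduction in the form commonly reported for NAL and OpenNARS; the constant k, the function names, and the example numbers are assumptions of this sketch, and the exact formulas should be checked against the NAL publications cited on this page.

```python
# Minimal sketch of NAL-style two-premise truth functions (assumed forms,
# to be checked against the cited NAL books; not an official implementation).
# A truth value is a (frequency, confidence) pair.

K = 1.0  # evidential horizon; 1 is the commonly reported default (assumption)

def deduction(f1, c1, f2, c2):
    # {M -> P <f1,c1>, S -> M <f2,c2>} |- S -> P
    f = f1 * f2
    return f, f * c1 * c2

def induction(f1, c1, f2, c2):
    # {M -> P <f1,c1>, M -> S <f2,c2>} |- S -> P
    w = f2 * c1 * c2                 # amount of evidence
    return f1, w / (w + K)

def abduction(f1, c1, f2, c2):
    # {P -> M <f1,c1>, S -> M <f2,c2>} |- S -> P
    w = f1 * c1 * c2
    return f2, w / (w + K)

# Example: a confident prediction (deduction) vs. a weaker plausible cause
# (abduction) derived from the same premise truth values.
print(deduction(0.9, 0.9, 0.8, 0.9))   # (0.72, ~0.58)
print(abduction(0.9, 0.9, 0.8, 0.9))   # (0.8, ~0.42)
```

The pattern mirrors the excerpt: deductive conclusions inherit confidence multiplicatively, while inductive and abductive conclusions remain hypotheses whose confidence grows only as evidence accumulates.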
... It operates under a definition of intelligence that includes a notion of insufficient resources, specifically: "Intelligence" in NARS is defined as the ability for a system to adapt to its environment and to work with insufficient knowledge and resources. Detailed discussions can be found in many publications, including two main books [10,11], describing the full logic. The following will highlight the elements of the logic that are used by this paper to encode the system description of an artifact under diagnosis. ...
Chapter
Full-text available
Symbolic reasoning systems have leveraged propositional logic frameworks to build diagnostic tools capable of describing complex artifacts, while also allowing for a controlled and efficacious search over failure modes. These diagnostic systems represent a complex and varied context in which to explore general intelligence. This paper explores the application of a different reasoning system to such frameworks, specifically, the Non-Axiomatic Reasoning System. It shows how statements can be built describing an artifact, and that NARS is capable of diagnosing abnormal states within examples of said artifact.
... Among the languages that use non-axiomatic knowledge, we can distinguish Narsese [29], NARS [30], and ALAS [31,32]. Narsese and NARS are languages used to build systems with learning ability. ...
Article
Full-text available
Artificial intelligence has been developed since the beginning of IT systems. Today there are many AI techniques that are successfully applied. Most of the AI field is, however, concerned with so-called “narrow AI”, demonstrating intelligence only in specialized areas. There is a need to work on general AI solutions that would constitute a framework enabling the integration of already developed narrow solutions and contribute to solving general problems. In this work, we present a new language that can potentially become a basis for building general-purpose intelligent systems in the future. This language is called the General Environment Description Language (GEDL). We present the motivation for our research based on other works in the field. Furthermore, there is an overall description of the idea and basic definitions of the elements of the language. We also present an example of GEDL usage in the JSON notation. The example shows how to store knowledge, define the problem to be solved, and represent the solution to the problem itself. In the end, we present potential fields of application and future work. This article is an introduction to new research in the field of Artificial General Intelligence.
... Therefore, a DNN is often considered to be a "black box". TruePAL's AI system consists of a DNN and a NARS [11,12]. In order to explain the system, we have opened up the DNN to identify specific features in the deep learning architecture. ...
Conference Paper
Full-text available
This paper presents the development of an AI assistant, Trusted and Explainable Artificial Intelligence for Saving Lives (TruePAL), to provide real-time warning of potential crash risks to first responders. The TruePAL system employs AI and deep learning technology to protect the lives of first responders and roadside crews in and around active traffic. A deep neural network (DNN) and a Non-Axiomatic Reasoning System (NARS) are implemented as the AI system. A mobile app with an AI interface is developed to perform verbal communication with the first responders. The TruePAL team has developed an explainable AI approach by opening up the DNN black box to extract the activation filters of various features and parts of the targeted objects. The combination of DNN and NARS makes the TruePAL system explainable to its users. TruePAL ingests on-board camera, radar, and other sensor signals, and analyzes the environment and traffic patterns to generate timely warnings to drivers and roadside crews to avoid crashes. The TruePAL team, in collaboration with the Miami/Dade Police Dept., has designed five use cases and multiple sub-scenarios in a CARLA driving simulator to test TruePAL's capability of timely warning first responder drivers in potential crash scenarios. We have successfully demonstrated its capability of timely warning in over a dozen scenarios based on the use cases. The preliminary test simulation results show that TruePAL could provide drivers and crew members advance warning before a crash occurs.
... Alternative approaches are Sowa et al. (2003), who take a conceptual graph approach and treat logical reasoning as a highly stylized form of analogical reasoning, and Pei Wang (2013), who presents a non-axiomatic logic founded in part on inheritance. ...
Preprint
Full-text available
The natural and the engineering sciences have long evolved a symbiotic relationship, but the humanities still stand apart. Designing and building a talking robot, however, is a challenge for which all three are needed. Agent-based Database Semantics (DBS) integrates recognition and action interfaces, an on-board orientation system, a content-addressable database, a data structure of nonrecursive feature structures with ordered attributes, and an algorithm of linear complexity.
... Adaptation in the definition refers to "the mechanism for a system to summarize its past experience to predict the future situations accordingly, and to allocate its bounded resources to meet the unbounded demands". In Pei Wang's theory [16], the constraints of insufficient knowledge and resources have been placed at the forefront, though they are obvious in human beings' and machines' lives. ...
Preprint
How to evaluate Artificial General Intelligence (AGI) is a critical problem that has been discussed for a long time and remains unsolved. In narrow AI research this does not seem to be a severe problem, since researchers in that field focus on specific problems and on one or a few aspects of cognition, and the criteria for evaluation are explicitly defined. By contrast, an AGI agent should solve problems that have never been encountered by either the agent or its developers. However, once a developer tests and debugs the agent on a problem, the never-encountered problem becomes an encountered one; as a result, the problem is to some extent solved by the developers, exploiting their experience, rather than by the agent. This conflict, which we call the trap of developers' experience, means that this kind of problem is unlikely to become an acknowledged criterion. In this paper, we propose an evaluation method named the Artificial Open World, aiming to escape this trap. The intuition is that most of the experience gained in the actual world should not need to apply to the artificial world, and the world should be open in some sense, such that developers are unable to perceive the world and solve problems by themselves before testing, though afterwards they are allowed to check all the data. The world is generated in a way similar to the actual world, and a general form of problems is proposed. A metric is proposed to quantify the progress of research. This paper describes the conceptual design of the Artificial Open World; the formalization and implementation are left to future work.
Article
Although the literature devoted to the naturalization of mainstream phenomenology has been blooming recently, not so many efforts have been made to make the intellectual legacy of Wittgenstein, who could also be viewed as a “linguistic phenomenologist,” accessible to cognitive science. The reluctance to naturalize Wittgenstein is sometimes backed by the worry that Wittgenstein’s criticism of the notion of “thinking” as some “internal process” also potentially threatens the computational theory of cognition. But this worry is itself based on some serious misunderstandings of the internal/external dichotomies, the clarification of which would greatly relieve the tension between Wittgenstein and cognitive science. Moreover, cognitive linguistics could also be viewed as an intermediate theory between Wittgenstein and cognitive science due to the affinities it bears with both Wittgenstein’s later philosophy and cognitive science.
Conference Paper
The aim of this paper is to introduce the design of a novel Distributed Non-Axiomatic Reasoning System. The system is based on Non-Axiomatic Logic, a formalism in the domain of artificial general intelligence designed for realizing systems with insufficient resources and knowledge. The proposed architecture is based on a layered and distributed structure of the backend knowledge base. This design makes the knowledge base fault-tolerant and scalable, and promises to allow the system to reason over large knowledge bases with real-time responsiveness.
Book
Full-text available
This book is about changing from the sign-based ontology of language analysis to the agent-based ontology of language communication in particular and cognition in general.
Chapter
Despite 50-plus years’ history of catheter-based cardiovascular intervention (CBCVI), surprisingly little has been learned about the cognitive nature of interventional skills. Thus, learning, teaching, and practice of CBCVI has remained largely based on traditional principles of empiricism associated with mentoring and apprenticeship. In this chapter, a cognitive approach to knowledge transfer in CBCVI is reviewed and discussed.
Chapter
As a tool dealing with information rather than matter, GIS shares with other information technologies the conceptual challenges of its medium. For a number of years now, ontology development has helped harness the complexity of the notion of information and has emerged as an effective means for improving the fitness for use of information products. More recently, the broadening range of users and user needs has led to increasing calls for “lightweight” ontologies very different in structure, expressivity, and scope from the traditional foundational or domain-oriented ones. This paper outlines a conceptual model suitable for generating micro-ontologies of geographic information tailored to specific user needs and purposes, while avoiding the traps of relativism that ad hoc efforts might engender. The model focuses on the notion of information decomposed into three interrelated “views”: that of measurements and formal operations on these, that of semantics that provide the meaning, and that of the context within which the information is interpreted and used. Together, these three aspects enable the construction of micro-ontologies, which correspond to user-motivated selections of measurements to fit particular, task-specific interpretations. The model supersedes the conceptual framework previously proposed by the author (Couclelis, Int J Geogr Inf Sci 24(12):1785–1809, 2010), which now becomes the semantic view. In its new role, the former framework allows informational threads to be traced through a nested sequence of layers of decreasing semantic richness, guided by user purpose. “Purpose” is here seen as both the interface between micro-ontologies and the social world that motivates user needs and perspectives, and as the primary principle in the selection and interpretation of Information most appropriate for the representational task at hand. Thus, the “I” in GIS also stands for the Individual whose need the tool serves.
Chapter
This paper describes the Adaptive Neuro-Symbolic Network Agent, a new design of a sensorimotor agent that adapts to its environment by building concepts based on Sparse Distributed Representations of sensorimotor sequences. Utilizing Non-Axiomatic Reasoning System theory, it is able to learn directional correlative links between concept activations that were caused by the appearance of observed and derived event sequences. These directed correlations are encoded as predictive links between concepts, and the system uses them for directed concept-driven activation spreading, prediction, anticipatory control, and decision-making, ultimately allowing the system to operate autonomously, driven by current event and concept activity, while working under the Assumption of Insufficient Knowledge and Resources.
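To make the idea of directional correlative links concrete, here is a small illustrative sketch (not the paper's implementation) in which a link A => B is strengthened whenever concept B becomes active right after A; the event names, the frequency-style strength, and the update rule are all assumptions of this sketch.

```python
# Illustrative sketch: learning directed predictive links between concept
# activations from event sequences (assumed representation, not the paper's).
from collections import defaultdict

CONCEPTS = ("door_open", "light_on", "alarm")
links = defaultdict(lambda: [0, 0])          # (A, B) -> [successes, trials]

def observe(sequence):
    # For each consecutive pair, update every potential link from the earlier
    # event: a trial for (earlier, B), a success only if B actually followed.
    for earlier, later in zip(sequence, sequence[1:]):
        for b in CONCEPTS:
            links[(earlier, b)][1] += 1
            if b == later:
                links[(earlier, b)][0] += 1

def strength(a, b):
    pos, total = links[(a, b)]
    return pos / total if total else 0.0

observe(["door_open", "light_on", "alarm"])
observe(["door_open", "light_on", "door_open", "light_on"])
print(strength("door_open", "light_on"))     # 1.0: light always followed door
print(strength("door_open", "alarm"))        # 0.0: never observed to follow
```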
Article
Case-based reasoning heavily depends on the structure and content of the cases, and semantics is essential to effectively represent cases. In the field of structured case representation, most works on case representation and the measurement of semantic similarity between cases are based on model-theoretic semantics and its extensions. The purpose of this study is to explore the potential of experience-grounded semantics in case representation and semantic similarity measurement. The main contents of this study are as follows: (i) a case representation model based on experience-grounded semantics is proposed, (ii) a novel semantic similarity measurement method with multi-strategy reasoning is introduced, and (iii) case-based reasoning software for the urban firefighting field based on the proposed model is designed and implemented. Theoretically, compared with traditional structured case representation methods, the proposed model not only represents cases in a fully formalized way, but also provides a novel metric for computing the strength of the semantic relationship between cases. The proposed model has been applied in intelligent decision-support software for urban firefighting.
Chapter
This paper discusses the attentional control mechanisms of several systems in the context of Artificial General Intelligence. The attentional control mechanism of OpenNARS, an implementation of the Non-Axiomatic Reasoning System for research purposes, is introduced with a description of the related functions and demonstration examples. The paper also implicitly compares the OpenNARS attentional mechanism with those found in other Artificial General Intelligence systems.
Chapter
We consider two cognitive architectures designed to support goal-directed behavior. One of them, the Theory of Functional Systems, is developed on the basis of the eponymous biological theory. The other, the Non-Axiomatic Reasoning System or NARS, utilizes a formal logic apparatus and weighted probabilistic choice to carry out its inference. The architectures were scrutinized and decomposed, and a number of conclusions were drawn about their advantages and disadvantages regarding goal-directed behavior processing.
Chapter
An implementation of a system inspired by the Non-Axiomatic Reasoning System is presented in this paper. The implementation features a goal system with deep derivation depths, which allows the system to solve moderately complicated problems. The reasoner utilizes Non-Axiomatic Logic for procedural and non-procedural reasoning. Most of the internal tasks are performed under the Assumption of Insufficient Knowledge and Resources, fulfilling various timing and resource constraints.
Conference Paper
In this position paper we propose an approach that uses the “Thinking-Understanding” architecture for the management of a real-time operated robotic system. Based on the “Robot dream” architecture, the robotic system's digital input is translated into “pseudo-spikes” and provided to a simulated spiking neural network, then elaborated and fed back to the robotic system as updated behavioural strategy rules. We present a rule-based reasoning system for intelligent spike processing that translates spikes into software actions or hardware signals. The reasoning is based on pattern-matching mechanisms that activate critics, which in turn activate other critics or ways to think, inherited from Marvin Minsky's work “The Emotion Machine” [7].
Conference Paper
Full-text available
Many problems in AI study can be traced back to the confusion of different research goals. In this paper, five typical ways to define AI are clarified, analyzed, and compared. It is argued that though they are all legitimate research goals, they lead research in very different directions, and most of them have trouble giving AI a proper identity. Finally, a working definition of AI is proposed, which has important advantages over the alternatives.
Conference Paper
Full-text available
The integration of artificial intelligence (AI) within cognitive science (CogSci) necessitates further elaborations on, and modelings of, several indispensable cognitive criteria. We approach this issue by emphasizing the close relation between artificial general intelligence (AGI) and CogSci, and discussing, particularly, "rationality" as one of such indispensable criteria. We give arguments evincing that normative models of human-like rationality are vital in AGI systems, where the treatment of deviations from traditional rationality models is also necessary. After conceptually addressing our rationality-guided approach, two case-study systems, NARS and HDTP, are discussed, explaining how the allegedly "irrational" behaviors can be treated within the respective frameworks.
Conference Paper
Full-text available
We challenge the validity of Dempster-Shafer Theory by using an emblematic example to show that the DS rule produces counter-intuitive results. Further analysis reveals that the result comes from an understanding of evidence pooling that goes against the common expectation of this process. Although DS theory has attracted some interest from the scientific community working in information fusion and artificial intelligence, its validity for solving practical problems is questionable, because it is not applicable to evidence combination in general, but only to certain types of situations which still need to be clearly identified.
Article
Full-text available
Personal motivation. The dream of creating artificial devices which reach or outperform human intelligence is an old one. It is also one of the two dreams of my youth, which have never let me go (the other is finding a physical theory of everything). What makes this challenge so interesting? A solution would have enormous implications on our society, and there are reasons to believe that the AI problem can be solved in my expected lifetime. So it’s worth sticking to it for a lifetime, even if it will take 30 years or so to reap the benefits. The AI problem. The science of Artificial Intelligence (AI) may be defined as the construction of intelligent systems and their analysis. A natural definition of a system is anything which has an input and an output stream. Intelligence is more complicated. It can have many faces like creativity, solving problems, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, surviving in an environment, language processing, knowledge, and many more. A formal definition incorporating every aspect of intelligence, however, seems difficult. Most, if not all known facets of intelligence can be formulated as goal
Article
Full-text available
GLAIR (Grounded Layered Architecture with Integrated Reasoning) is a multilayered cognitive architecture for embodied agents operating in real, virtual, or simulated environments containing other agents. The highest layer of the GLAIR Architecture, the Knowledge Layer (KL), contains the beliefs of the agent, and is the layer in which conscious reasoning, planning, and act selection is performed. The lowest layer of the GLAIR Architecture, the Sensori-Actuator Layer (SAL), contains the controllers of the sensors and effectors of the hardware or software robot. Between the KL and the SAL is the Perceptuo-Motor Layer (PML), which grounds the KL symbols in perceptual structures and subconscious actions, contains various registers for providing the agent's sense of situatedness in the environment, and handles translation and communication between the KL and the SAL. The motivation for the development of GLAIR has been "Computational Philosophy", the computational understanding and implementation of human-level intelligent behavior without necessarily being bound by the actual implementation of the human mind. Nevertheless, the approach has been inspired by human psychology and biology.
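For orientation, here is a minimal sketch of the three-layer arrangement described above, in which a symbolic act selected at the knowledge level is grounded by a perceptuo-motor layer and executed by a sensori-actuator layer; the class names, the grounding table, and the example belief are assumptions of this sketch, not GLAIR's actual API.

```python
# Toy three-layer agent loosely following the KL / PML / SAL layering above.
# All names and the grounding mapping are illustrative assumptions.
class SensoriActuatorLayer:
    def execute(self, command):
        print(f"actuator command: {command}")

class PerceptuoMotorLayer:
    def __init__(self, sal):
        self.sal = sal
        self.grounding = {"greet": "wave_arm"}   # symbol -> motor routine

    def perform(self, act_symbol):
        # grounds a knowledge-level symbol in a subconscious motor action
        self.sal.execute(self.grounding[act_symbol])

class KnowledgeLayer:
    def __init__(self, pml):
        self.pml = pml
        self.beliefs = {"visitor_present": True}

    def select_act(self):
        # conscious reasoning and act selection happen at this layer
        if self.beliefs.get("visitor_present"):
            self.pml.perform("greet")

agent = KnowledgeLayer(PerceptuoMotorLayer(SensoriActuatorLayer()))
agent.select_act()   # prints: actuator command: wave_arm
```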
Article
Full-text available
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.
Article
Full-text available
This collection is devoted to the analysis and application of abductive and inductive reasoning in a common context, studying their relation and possible ways for integration. There are several reasons for doing so. One reason is practical, and based on the expectation that abduction and induction are sufficiently similar to allow for a tight integration in practical systems, yet sufficiently complementary for this integration to be useful and productive.
Chapter
Full-text available
This chapter presents a model of classical conditioning called the temporal-difference (TD) model. The TD model was originally developed as a neuronlike unit for use in adaptive networks (Sutton and Barto 1987; Sutton 1984; Barto, Sutton and Anderson 1983). In this paper, however, we analyze it from the point of view of animal learning theory. Our intended audience is both animal learning researchers interested in computational theories of behavior and machine learning researchers interested in how their learning algorithms relate to, and may be constrained by, animal learning studies. For an exposition of the TD model from an engineering point of view, see Chapter 13 of this volume. We focus on what we see as the primary theoretical contribution to animal learning theory of the TD and related models: the hypothesis that reinforcement in classical conditioning is the time derivative of a composite association combining innate (US) and acquired (CS) associations. We call models based on some variant of this hypothesis time-derivative models, examples of which are the models by Klopf (1988), Sutton and Barto
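The central hypothesis above, reinforcement as the time derivative of a composite prediction, is most easily seen in the prediction-error term of a modern TD update. The following minimal TD(0) sketch on a five-state chain is an illustration under assumed parameter values, not the chapter's own model of conditioning.

```python
# Minimal TD(0) sketch: delta, the temporal-difference error, plays the role
# of the derivative-like reinforcement signal discussed above.
# The chain environment, step size, and discount are illustrative assumptions.
alpha, gamma = 0.1, 0.9
V = [0.0] * 5                          # value estimates for states 0..4

for episode in range(500):
    s = 0
    while s < 4:                       # state 4 is terminal; entering it pays 1
        s_next = s + 1
        r = 1.0 if s_next == 4 else 0.0
        v_next = 0.0 if s_next == 4 else V[s_next]
        delta = r + gamma * v_next - V[s]      # prediction error (TD error)
        V[s] += alpha * delta
        s = s_next

print([round(v, 2) for v in V])        # roughly [0.73, 0.81, 0.9, 1.0, 0.0]
```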
Article
Full-text available
Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture's role as a foundational architecture for an AGI. Finally, lessons for AGI researchers drawn from the model and its architecture are discussed.
Article
Full-text available
This introductory chapter sets the stage for the book as a whole. First, the purpose of the conference that led to the book is reviewed. Then, the notion of “Artificial General Intelligence” (AGI) is clarified, including a brief survey of the past and present situation of the field, an analysis and refutation of some common objections and doubts regarding the AGI area of research, and a discussion of what needs to be addressed by the field as a whole in the near future. Finally, there is a summary of the contents of the other chapters in the book.
Chapter
This chapter is a reprint of Frank P. Ramsey’s seminal paper “Truth and Probability”, written in 1926 and first published posthumously in 1931 in The Foundations of Mathematics and other Logical Essays, ed. R.B. Braithwaite, London: Routledge & Kegan Paul Ltd. The paper lays the foundations for the modern theory of subjective probability. Ramsey argues that degrees of belief may be measured by the acceptability of odds on bets, and provides a set of decision theoretic axioms, which jointly imply the laws of probability.
Chapter
This paper extends earlier work by its authors on formal aspects of the processes of contracting a theory to eliminate a proposition and revising a theory to introduce a proposition. In the course of the earlier work, Gärdenfors developed general postulates of a more or less equational nature for such processes, whilst Alchourrón and Makinson studied the particular case of contraction functions that are maximal, in the sense of yielding a maximal subset of the theory (or alternatively, of one of its axiomatic bases), that fails to imply the proposition being eliminated. In the present paper, the authors study a broader class, including contraction functions that may be less than maximal. Specifically, they investigate “partial meet contraction functions”, which are defined to yield the intersection of some nonempty family of maximal subsets of the theory that fail to imply the proposition being eliminated. Basic properties of these functions are established: it is shown in particular that they satisfy the Gärdenfors postulates, and moreover that they are sufficiently general to provide a representation theorem for those postulates. Some special classes of partial meet contraction functions, notably those that are “relational” and “transitively relational”, are studied in detail, and their connections with certain “supplementary postulates” of Gärdenfors investigated, with a further representation theorem established.
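As a toy illustration of the construction described above, the sketch below enumerates the remainders of a small propositional belief base (maximal subsets that fail to imply the proposition being eliminated) and returns the intersection of a selected family of them. The brute-force entailment check, the example base, and all names are assumptions made for this sketch, not the authors' formalism.

```python
# Toy partial meet contraction over a finite propositional belief base.
# Formulas are represented as functions from a truth assignment to bool;
# entailment is checked by brute force over all assignments (tiny bases only).
from itertools import combinations, product

ATOMS = ["p", "q"]

def entails(formulas, goal):
    # True iff every assignment satisfying all formulas also satisfies goal.
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in formulas) and not goal(v):
            return False
    return True

def remainders(base, goal):
    # Maximal subsets of the base that do NOT entail the goal.
    found = []
    for size in range(len(base), -1, -1):
        for subset in combinations(base, size):
            if not entails(list(subset), goal) and \
               not any(set(subset) <= set(r) for r in found):
                found.append(subset)
    return found

def partial_meet_contraction(base, goal, select):
    rems = remainders(base, goal)
    chosen = select(rems) if rems else [tuple(base)]   # tautology: keep base
    return set.intersection(*(set(r) for r in chosen))

# Base {p, p -> q, q}, contracted by q with the "full meet" selection
# (keep every remainder); the two remainders {p} and {p -> q} share nothing.
p      = lambda v: v["p"]
p_to_q = lambda v: (not v["p"]) or v["q"]
q      = lambda v: v["q"]
result = partial_meet_contraction([p, p_to_q, q], q, select=lambda rems: rems)
print(len(result))   # 0: full meet is the most cautious (smallest) contraction
```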
Article
Modern studies of Aristotle's logical works have been deeply informed by modern logical theory. Beginning with Łukasiewicz and Scholz, historians of logic have used the methods of symbolic logic to interpret Aristotle's logical theories, and with rich results. George Grote's picture of Aristotle's logic, and indeed his picture of logic itself, is heavily influenced by a separate issue going back at least to Francis Bacon. From a modern perspective, Aristotle's most significant achievement as a logician is his theory of valid inference, usually known today as the syllogistic. Grote proceeds largely by way of paraphrase, in the tradition of ancient commentators like Sophonias. Although Grote does not pretend to be offering an edition of Aristotle's logical works, he is thoroughly familiar with ancient and modern commentators and sometimes engages with them on philological points. Keywords: Aristotle's logic; Francis Bacon; George Grote; Sophonias
Article
Many decisions are based on beliefs concerning the likelihood of uncertain events such as the outcome of an election, the guilt of a defendant, or the future value of the dollar. Occasionally, beliefs concerning uncertain events are expressed in numerical form as odds or subjective probabilities. In general, the heuristics are quite useful, but sometimes they lead to severe and systematic errors. The subjective assessment of probability resembles the subjective assessment of physical quantities such as distance or size. These judgments are all based on data of limited validity, which are processed according to heuristic rules. However, the reliance on this rule leads to systematic errors in the estimation of distance. This chapter describes three heuristics that are employed in making judgments under uncertainty. The first is representativeness, which is usually employed when people are asked to judge the probability that an object or event belongs to a class or event. The second is the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development, and the third is adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available.
Book
'Commit it then to the flames: for it can contain nothing but sophistry and illusion.' Thus ends David Hume's Enquiry concerning Human Understanding, the definitive statement of the greatest philosopher in the English language. His arguments in support of reasoning from experience, and against the 'sophistry and illusion' of religiously inspired philosophical fantasies, caused controversy in the eighteenth century and are strikingly relevant today, when faith and science continue to clash. The Enquiry considers the origin and processes of human thought, reaching the stark conclusion that we can have no ultimate understanding of the physical world, or indeed our own minds. In either sphere we must depend on instinctive learning from experience, recognizing our animal nature and the limits of reason. Hume's calm and open-minded scepticism thus aims to provide a new basis for science, liberating us from the 'superstition' of false metaphysics and religion. His Enquiry remains one of the best introductions to the study of philosophy, and this edition places it in its historical and philosophical context.
Article
While most of the current works in Artificial Intelligence (AI) focus on individual aspects of intelligence and cognition, the project described in this book, Non-Axiomatic Reasoning System (NARS), is designed and developed to attack the AI problem as a whole. This project is based on the belief that what we call "intelligence" can be understood and reproduced as "the capability of a system to adapt to its environment while working with insufficient knowledge and resources". According to this idea, a novel reasoning system is designed, which challenges all the dominating theories in how such a system should be built. The system carries out reasoning, learning, categorizing, planning, decision making, etc., as different facets of the same underlying process. This theory also provides unified solutions to many problems in AI, logic, psychology, and philosophy. This book is the most comprehensive description of this decades-long project, including its philosophical foundation, methodological consideration, conceptual design details, its implications in the related fields, as well as its similarities and differences to many related works in cognitive sciences.
Article
Intelligence can be understood as a form of rationality, in the sense that an intelligent system does its best when its knowledge and resources are insufficient with respect to the problems to be solved. The traditional models of rationality typically assume some form of sufficiency of knowledge and resources, so cannot solve many theoretical and practical problems in Artificial Intelligence (AI). New models based on the Assumption of Insufficient Knowledge and Resources (AIKR) cannot be obtained by minor revisions or extensions of the traditional models, and have to be established fully according to the restrictions and freedoms provided by AIKR. The practice of NARS, an AI project, shows that such new models are feasible and promising in providing a new theoretical foundation for the study of rationality, intelligence, consciousness, and mind.
Book
This book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. Going beyond prior probabilistic approaches to uncertain inference, PLN encompasses such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality. The book provides an overview of PLN in the context of other approaches to uncertain inference. Topics addressed in the text include:
• the basic formalism of PLN knowledge representation
• the conceptual interpretation of the terms used in PLN
• an indefinite probability approach to quantifying uncertainty, providing a general method for calculating the "weight-of-evidence" underlying the conclusions of uncertain inference
• specific PLN inference rules and the corresponding truth-value formulas used to determine the strength of the conclusion of an inference rule from the strengths of the premises
• large-scale inference strategies
• inference using variables
• indefinite probabilities involving quantifiers
• inheritance based on properties or patterns
• the Novamente Cognition Engine, an application of PLN
• temporal and causal logic in PLN
Researchers and graduate students in artificial intelligence, computer science, mathematics and cognitive sciences will find this novel perspective on uncertain inference a thought-provoking integration of ideas from a variety of other lines of inquiry. © 2009 Springer Science+Business Media, LLC. All rights reserved.
Article
This paper considers both software development and computer system use from the viewpoint of the human effort involved. It attempts to identify various factors contributing to the successful development and use of computer programs and systems. For example, ...
Article
As many philosophers agree, the frame problem is concerned with how an agent may efficiently filter out irrelevant information in the process of problem-solving. Hence, how to solve this problem hinges on how to properly handle semantic relevance in cognitive modeling, which is an area of cognitive science that deals with simulating human’s cognitive processes in a computerized model. By “semantic relevance”, we mean certain inferential relations among acquired beliefs which may facilitate information retrieval and practical reasoning under certain epistemic constraints, e.g., the insufficiency of knowledge, the limitation of time budget, etc. However, traditional approaches to relevance—as for example, relevance logic, the Bayesian approach, as well as Description Logic—have failed to do justice to the foregoing constraints, and in this sense, they are not proper tools for solving the frame problem/relevance problem. As we will argue in this paper, Non-Axiomatic Reasoning System (NARS) can handle the frame problem in a more proper manner, because the resulting solution seriously takes epistemic constraints on cognition as a fundamental theoretical principle.
Article
The more common interpretations of is-a links are cataloged, and some differences between systems that on the surface appear very similar are pointed out. A rational reconstruction of the is-a relation is developed, and suggestions are made regarding how the next generation of knowledge-representation languages should be structured. In particular, the analysis indicates that meanings might be a lot clearer if is-a were broken down into its semantic subcomponents and those subcomponents then used as the primitives of a representation system.
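A small sketch of the kind of decomposition argued for above: instead of one overloaded is-a link, the toy representation below keeps class membership (instance_of) and class specialization (subclass_of) as separate primitives, treating only the latter as transitive. The vocabulary and example facts are assumptions for illustration, not the article's own proposal.

```python
# Toy knowledge base separating two readings of "is-a":
# subclass_of (class -> superclass, transitive) and instance_of (individual -> class).
subclass_of = {"canary": "bird", "bird": "animal"}
instance_of = {"tweety": "canary"}

def superclasses(cls):
    # walk the transitive subclass chain: canary -> bird -> animal
    while cls in subclass_of:
        cls = subclass_of[cls]
        yield cls

def is_instance(individual, cls):
    # membership combines one instance_of step with the subclass chain
    direct = instance_of.get(individual)
    return direct == cls or cls in superclasses(direct)

print(is_instance("tweety", "animal"))   # True: via canary -> bird -> animal
print("tweety" in subclass_of)           # False: an individual is not a class
```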
Article
In Part I, four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction. Almost all, if not all problems in induction can be put in this form. Some strong heuristic arguments have been obtained for the equivalence of the last three models. One of these models is equivalent to a Bayes formulation, in which a priori probabilities are assigned to sequences of symbols on the basis of the lengths of inputs to a universal Turing machine that are required to produce the sequence of interest as output. Though it seems likely, it is not certain whether the first of the four models is equivalent to the other three. Few rigorous results are presented. Informal investigations are made of the properties of these models. There are discussions of their consistency and meaningfulness, of their degree of independence of the exact nature of the Turing machine used, and of the accuracy of their predictions in comparison to those of other induction methods. In Part II these models are applied to the solution of three problems—prediction of the Bernoulli sequence, extrapolation of a certain kind of Markov chain, and the use of phrase structure grammars for induction. Though some approximations are used, the first of these problems is treated most rigorously. The result is Laplace's rule of succession. The solution to the second problem uses less certain approximations, but the properties of the solution that are discussed, are fairly independent of these approximations. The third application, using phrase structure grammars, is least exact of the three. First a formal solution is presented. Though it appears to have certain deficiencies, it is hoped that presentation of this admittedly inadequate model will suggest acceptable improvements in it. This formal solution is then applied in an approximate way to the determination of the “optimum” phrase structure grammar for a given set of strings. The results that are obtained are plausible, but subject to the uncertainties of the approximation used.
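The Bayes formulation described above, with a priori probabilities assigned on the basis of input lengths to a universal Turing machine, is nowadays usually summarized by the algorithmic (universal) prior. The notation below is a commonly cited modern form added here for orientation, not the paper's own:

```latex
% Universal a priori probability of a binary string x: sum the weights of all
% (prefix-free) programs p on which the universal machine U produces output
% beginning with x, each program weighted by its length \ell(p).
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```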
Article
The paper is concerned with the theoretical underpinnings for semantic network representations. It is concerned specifically with understanding the semantics of the semantic network structures themselves, i.e., with what the notations and structures used in a semantic network can mean, and with interpretations of what these links mean that will be logically adequate to the job of representing knowledge. It focuses on several issues: the meaning of 'semantics', the need for explicit understanding of the intended meanings for various types of arcs and links, the need for careful thought in choosing conventions for representing facts as assemblages of arcs and nodes, and several specific difficult problems in knowledge representation - especially problems of relative clauses and quantification.
Article
An abstract is not available.
Article
An abstract is not available.