Article

Is Logic in the Mind or in the World? Why a Philosophical Question Can Affect the Understanding of Intelligence

Abstract

Dreyfus' call ‘to make artificial intelligence (AI) more Heideggerian’ echoes Heidegger's affirmation that pure calculations produce no ‘intelligence’ (Dreyfus, 2007). But what exactly is it that AI needs beyond mathematics? The question in the title gives rise to a reexamination of the basic principles of cognition in Husserl's Phenomenology. Using Husserl's Phenomenological Method, a formalization of these principles is presented that provides the principal idea of cognition and, as a consequence, a ‘natural logic’. Only in a second step is mathematics obtained from this natural logic by abstraction. The limitations of pure reasoning are demonstrated both for fundamental considerations (Hilbert's ‘finite Einstellung’) and for the task of solving practical problems. Principles are presented for the design of general intelligent systems that make use of a natural logic.

Conference Paper
Problem solving and problem understanding are interwoven. Often at the beginning of a project, engineers' knowledge is insufficient with regard to the posed problems. The correct specification of a task may depend on its solution, which is why the conventional sequence of specification and solution cannot be maintained. Methods are needed to deal with 'imprecisely posed problems', by which the solution process is adapted only to the essential problem structures. We present procedures to detect these essential structures and to acquire problem solving skills from the solution process itself.
Article
Abstract: This paper describes concept development and the discovery of coordination rules for complex coupled systems. Using this task as an example, the significance of Computational Intelligence for systems theory is explained.
Article
Full-text available
Throughout its long history, mathematics has involved the use of systems of written signs, most notably, diagrams in Euclidean geometry and formulae in the symbolic language of arithmetic and algebra in the mathematics of Descartes, Euler, and others. Such systems of signs enable one to embody chains of mathematical reasoning. The author shows that, properly understood, Frege’s Begriffsschrift or concept-script similarly enables one to write mathematical reasoning. Much as a demonstration in Euclid or in early modern algebra does, a proof in Frege’s concept-script shows how it goes.
Article
Full-text available
NARS is an AGI project developed in the framework of a reasoning system, and it adapts to its environment with insufficient knowledge and resources. The development of NARS takes an incremental approach, extending the formal model stage by stage. The system, when finished, can be further augmented in several directions.
Article
Full-text available
Perhaps the simplest and most basic qualitative law of probability is the conjunction rule: the probability of a conjunction, P(A&B), cannot exceed the probabilities of its constituents, P(A) and P(B), because the extension (or the possibility set) of the conjunction is included in the extension of its constituents. Judgments under uncertainty, however, are often mediated by intuitive heuristics that are not bound by the conjunction rule. A conjunction can be more representative than one of its constituents, and instances of a specific category can be easier to imagine or to retrieve than instances of a more inclusive category. The representativeness and availability heuristics therefore can make a conjunction appear more probable than one of its constituents. This phenomenon is demonstrated in a variety of contexts, including estimation of word frequency, personality judgment, medical prognosis, decision under risk, suspicion of criminal acts, and political forecasting. Systematic violations of the conjunction rule are observed in judgments of lay people and of experts in both between- and within-subjects comparisons. Alternative interpretations of the conjunction fallacy are discussed, and attempts to combat it are explored.
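The conjunction rule itself is a purely set-theoretic fact and can be checked mechanically. Below is a minimal Python sketch (hypothetical random event sets over a finite outcome space, not the paper's experimental materials) verifying that P(A&B) never exceeds P(A) or P(B):

```python
import random

# Hypothetical finite outcome space with a uniform distribution;
# events are random subsets, for illustration only.
N = 1000
outcomes = range(N)

def prob(event):
    # Uniform probability of an event (a set of outcomes).
    return len(event) / N

for _ in range(100):
    a = {o for o in outcomes if random.random() < 0.3}
    b = {o for o in outcomes if random.random() < 0.5}
    # The extension of A&B is contained in that of A and of B,
    # so its probability can never exceed either constituent's.
    assert prob(a & b) <= min(prob(a), prob(b))
```

The fallacy the paper documents is precisely that intuitive judgments routinely violate this inequality, even though no probability assignment can.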
Article
Full-text available
Formalization of Evidence: A Comparative Study. This article analyzes and compares several approaches to formalizing the notion of evidence in the context of general-purpose reasoning systems. In each of these approaches, the notion of evidence is defined, and the evidence-based degree of belief is represented by a binary value, a number (such as a probability), or two numbers (such as an interval). The binary approaches provide simple ways to represent conclusive evidence, but cannot properly handle inconclusive evidence. The one-number approaches naturally represent inconclusive evidence as a degree of belief, but lack the information needed to revise this degree. It is argued that for systems open to new evidence, each belief should have at least two numbers attached to indicate its evidential support. A few such approaches are discussed, including the approach used in NARS, which is designed according to the considerations of general-purpose intelligent systems, and provides novel solutions to several traditional problems concerning evidence.
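To make the two-number idea concrete, here is a small Python sketch in the spirit of NARS, where positive and negative evidence are counted separately and summarized as a (frequency, confidence) pair, f = w+/w and c = w/(w+k) with evidential horizon k (commonly k = 1). The revision function below illustrates the general evidence-pooling idea; treat the exact revision bookkeeping as an assumption of this sketch rather than the paper's full formalism.

```python
def truth_from_evidence(w_plus: float, w_minus: float, k: float = 1.0):
    """Two-number summary of evidence: frequency is the proportion
    of positive evidence; confidence grows with total evidence but
    never reaches 1, leaving room for revision by new evidence."""
    w = w_plus + w_minus
    frequency = w_plus / w if w > 0 else 0.5  # no evidence: undecided
    confidence = w / (w + k)
    return frequency, confidence

def revise(t1, t2, k: float = 1.0):
    # Pool evidence from independent sources by converting each
    # (f, c) pair back to evidence weights and adding them.
    def weights(f, c):
        w = k * c / (1 - c)
        return f * w, (1 - f) * w
    w1p, w1m = weights(*t1)
    w2p, w2m = weights(*t2)
    return truth_from_evidence(w1p + w2p, w1m + w2m, k)

print(truth_from_evidence(3, 1))  # (0.75, 0.8)
```

Note how a one-number degree of belief (frequency alone) would be unable to distinguish 3-out-of-4 evidence from 300-out-of-400, which is exactly the revision information the article argues must be retained.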
Article
Full-text available
Throughout his career, Husserl identifies naturalism as the greatest threat to both the sciences and philosophy. In this paper, I explicate Husserl’s overall diagnosis and critique of naturalism and then examine the specific transcendental aspect of his critique. Husserl agreed with the Neo-Kantians in rejecting naturalism. He has three major critiques of naturalism: First, it (like psychologism and for the same reasons) is ‘countersensical’ in that it denies the very ideal laws that it needs for its own justification. Second, naturalism essentially misconstrues consciousness by treating it as a part of the world. Third, naturalism is the inevitable consequence of a certain rigidification of the ‘natural attitude’ into what Husserl calls the ‘naturalistic attitude’. This naturalistic attitude ‘reifies’ and it ‘absolutizes’ the world such that it is treated as taken-for-granted and ‘obvious’. Husserl’s transcendental phenomenological analysis, however, discloses that the natural attitude is, despite its omnipresence in everyday life, not primary, but in fact is relative to the ‘absolute’ transcendental attitude. The mature Husserl’s critique of naturalism is therefore based on his acceptance of the absolute priority of the transcendental attitude. The paradox remains that we must start from and, in a sense, return to the natural attitude, while, at the same time, restricting this attitude through the on-going transcendental vigilance of the universal epoché.
Article
Full-text available
The paper presents an outline of a unified answer to five questions concerning logic: (1) Is logic in the mind or in the world? (2) Does logic need a foundation? What is the main obstacle to a foundation for logic? Can it be overcome? (3) How does logic work? What does logical form represent? Are logical constants referential? (4) Is there a criterion of logicality? (5) What is the relation between logic and mathematics?
Article
Full-text available
This paper describes a computationally feasible approximation to the AIXI agent, a universal reinforcement learning agent for arbitrary environments. AIXI is scaled down in two key ways: first, the class of environment models is restricted to all prediction suffix trees of a fixed maximum depth. This allows a Bayesian mixture of environment models to be computed in time proportional to the logarithm of the size of the model class. Second, the finite-horizon expectimax search is approximated by an asymptotically convergent Monte Carlo Tree Search technique. This scaled-down AIXI agent is empirically shown to be effective on a wide class of toy problem domains, ranging from simple fully observable games to small POMDPs. We explore the limits of this approximate agent and propose a general heuristic framework for scaling this technique to much larger problems.
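The second approximation, replacing exhaustive expectimax by sampling, can be sketched in a few lines. The following Python fragment shows only the plain Monte Carlo core; a hypothetical `env_model` object with `copy()`, `actions()`, and `sample(action)` methods is assumed, standing in for the paper's context-tree-weighting model mixture, and the actual agent uses a full UCT-style tree search rather than uniform rollouts.

```python
import random

def mc_action_value(env_model, action, horizon, rollouts=100):
    """Estimate an action's value by averaging random rollouts
    through a copyable generative environment model. A stand-in
    for the paper's Monte Carlo Tree Search, not its algorithm."""
    total = 0.0
    for _ in range(rollouts):
        sim = env_model.copy()
        ret = sim.sample(action)              # immediate reward
        for _ in range(horizon - 1):
            ret += sim.sample(random.choice(sim.actions()))
        total += ret
    return total / rollouts

def choose_action(env_model, horizon):
    # Greedy selection over sampled value estimates.
    return max(env_model.actions(),
               key=lambda a: mc_action_value(env_model, a, horizon))
```

The point of the approximation is that the estimate converges to the expectimax value as the number of rollouts grows, while each rollout costs only time linear in the horizon.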
Article
Full-text available
In the discussions on the limitations of Artificial Intelligence (AI), there are three major misconceptions, which identify an AI system with an axiomatic system, a Turing machine, and a system with a model-theoretic semantics, respectively. Though these three notions can be used to describe a computer system for certain purposes, they are not always the proper theoretical notions when an AI system is under consideration. These misconceptions are not only the basis of many criticisms of AI from the outside, but are also responsible for many problems within AI research. This paper analyzes these misconceptions and points out their common root: treating empirical reasoning as mathematical reasoning. Finally, an intelligent system, NARS, is introduced as an example, which is neither an axiomatic system nor a Turing machine in its problem-solving process, and does not use a model-theoretic semantics, though it is still implementable on an ordinary computer.
Article
Full-text available
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed Monte Carlo (MC) model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is much more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
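The background-modeling step can be sketched with standard tools. The following Python illustration uses scikit-learn on synthetic data; note that the paper's actual procedure keeps the pre-fitted background components fixed while fitting the additional signal Gaussians, whereas this sketch simply flags events poorly explained by the background model, so treat it as the general idea only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins: a background control sample and an observed
# sample containing a small anomalous cluster around (3, 3).
background = rng.normal(0.0, 1.0, size=(5000, 2))
signal = rng.normal(3.0, 0.3, size=(200, 2))
observed = np.vstack([rng.normal(0.0, 1.0, size=(2000, 2)), signal])

# Step 1: model the background alone as a Gaussian mixture.
bg_model = GaussianMixture(n_components=3, random_state=0).fit(background)

# Step 2: fit the observations with extra components available, so
# any excess over the background can be absorbed by new Gaussians.
full_model = GaussianMixture(n_components=5, random_state=0).fit(observed)

# Flag events the background model explains poorly.
bg_loglik = bg_model.score_samples(observed)
anomalous = observed[bg_loglik < np.quantile(bg_loglik, 0.02)]
print(f"flagged {len(anomalous)} candidate anomaly events")
```

Because no signal MC enters the fit, the method cannot be misled by a wrong signal model, which is the robustness property the comparison with neural network classifiers demonstrates.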
Article
Full-text available
General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite state Markov decision processes (MDPs). Up to now, extracting the right state representations out of bare observations, that is, reducing the general agent setup to the MDP framework, is an art that involves significant effort by designers. The primary goal of this work is to automate the reduction process and thereby significantly expand the scope of many existing reinforcement learning algorithms and the agents that employ them. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in Part II. The role of POMDPs is also considered there.
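The reduction being formalized can be illustrated schematically: a feature map Φ sends histories to candidate states, and competing maps are scored by how cheaply the induced state sequence can be coded. The Python sketch below uses an illustrative MDL-flavored score (negative log-likelihood of empirical transitions plus a state-count penalty); the article's actual cost criterion is more refined, so this is a stand-in for the idea, not the published formula.

```python
from collections import Counter
from math import log2

def mdp_code_length(history, phi):
    """Score a candidate state abstraction phi by the code length of
    the state-transition sequence it induces. `history` is a list of
    (observation, action) pairs; reward coding is omitted here."""
    states = [phi(history[:t + 1]) for t in range(len(history))]
    transitions = Counter(zip(states, states[1:]))
    counts_from = Counter(s for s, _ in transitions.elements())
    bits = 0.0
    for (s, s2), n in transitions.items():
        bits -= n * log2(n / counts_from[s])  # empirical transition code
    # Penalize large state spaces so trivial injective maps lose.
    bits += len(set(states)) * log2(len(history) + 1)
    return bits

# A simple candidate map: the state is just the last observation.
phi_last = lambda h: h[-1][0]
```

Minimizing such a score over candidate maps is what turns the designer's "art" of state construction into a search problem an algorithm can carry out.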
Article
Full-text available
The Relational Blockworld (RBW) interpretation of non-relativistic quantum mechanics (NRQM) is introduced. Accordingly, the spacetime of NRQM is a relational, nonseparable blockworld whereby spatial distance is only defined between interacting transtemporal objects. RBW is shown to provide a novel statistical interpretation of the wavefunction that deflates the measurement problem, as well as a geometric account of quantum entanglement and non-separability that satisfies locality per special relativity and is free of interpretative mystery. We present RBW's acausal and adynamical resolution of the so-called "quantum liar paradox," an experimental set-up alleged to be problematic for a spacetime conception of reality, and conclude by speculating on RBW's implications for quantum gravity.
Article
Full-text available
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view.
Article
Full-text available
Specialized intelligent systems can be found everywhere: fingerprint, handwriting, speech, and face recognition, spam filtering, chess and other game programs, robots, and so on. This decade, the first presumably complete mathematical theory of artificial intelligence based on universal induction-prediction-decision-action has been proposed. This information-theoretic approach solidifies the foundations of inductive inference and artificial intelligence. Getting the foundations right usually marks significant progress and maturing of a field. The theory provides a gold standard and guidance for researchers working on intelligent algorithms. The roots of universal induction were laid exactly half a century ago and the roots of universal intelligence exactly one decade ago. So it is timely to take stock of what has been achieved and what remains to be done. Since there are already good recent surveys, I describe the state of the art only in passing and refer the reader to the literature. This article concentrates on the open problems in universal induction and its extension to universal intelligence.
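The universal induction at the root of this program is Solomonoff's: a sequence prefix x is assigned the universal a priori probability obtained by weighting every program that reproduces it by its length. In standard notation (U a universal monotone Turing machine, ℓ(p) the length of program p, and U(p) = x* meaning the output of p begins with x):

```latex
M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-\ell(p)}
```

Prediction then proceeds by conditioning, M(x_{t+1} | x_{1:t}) = M(x_{1:t} x_{t+1}) / M(x_{1:t}), which converges to the true conditional probabilities for any computable environment; the open problems the article surveys concern, among other things, how far this guarantee can be carried over to the active, reward-seeking setting.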
Article
Full-text available
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
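For reference, the resulting Legg–Hutter measure has the following form (E the class of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the expected total reward of agent π in μ):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Simple environments dominate the weighting, while the breadth of the sum over all computable environments is what distinguishes this measure from any task-specific performance metric.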
Article
The paper discusses Husserl's phenomenology of mathematics in his Formal and Transcendental Logic (1929). In it Husserl seeks to provide descriptive foundations for mathematics. As sciences and mathematics are normative activities, Husserl's attempt is also to describe the norms at work in these disciplines. The description shows that mathematics can be given in several different ways. The phenomenologist's task is to examine whether a given part of mathematics is genuine according to the norms that pertain to the approach in question. The paper will then examine the intuitionistic, formalistic, and structural features of Husserl's philosophy of mathematics.
Article
The subject of the present work is noema and its structure in various stages of the objectivating process. Despite its great importance, this issue has never been adequately explained, neither by Husserl nor by his followers. The main objective is to provide the theory that would describe the structure of noema and its function without simplifying the case or appealing to non-phenomenological data. This has been achieved by way of analysis divided into four sections. The first provides an overview of noema. The second section is devoted to analysis of the process of objectivation, i.e., how an active awareness of the object in a logical sense is constituted by a series of passive experiences. The third section refers to a noema as found at different stages of objectivation. It explains how the increasing level of activity, which turns out to be a noetic function, causes changes to the structure of a noema. The last section summarises the results and stresses the advantages of the developed theory in comparison with other interpretations, especially those offered by Drummond, Smith and McIntyre.
Article
In the present contribution, some thoughts on the use of language in psychoanalysis and psychotherapy are developed against the background of recent findings from linguistics, neurolinguistics, and the neurosciences in general. The conclusion is that taking into account the child's subsymbolic/symbolic language development, and the affective attunement between the child and its environment, is of fundamental importance in the psychotherapeutic relationship.
Book
At the beginning of the present treatise we discussed the observation that many scientists consider Newton's classical physics understandable and intuitive, whereas the Modern Physics of the twentieth century is judged difficult to grasp and unintuitive. This assessment is shared by many physicists and presumably by the majority of philosophers of science. Here, we did not investigate the question why physicists as well as philosophers accept these statements as true, simply because we consider both theses erroneous.
Article
Charles Darwin claimed that the forms and the behaviour of living beings can be explained from their will to survive. But what are the consequences of this idea for human knowledge, theories of nature, and mathematics? We discuss the view that even Plato's objective world of mathematical objects does not exist absolutely, independently of the intentions of mathematicians. Using Husserl's Phenomenological Method, cognition can be understood as a process by which meaning is deduced from empirical data relative to intentions. Thereby the essential structure of any cognition process can be detected, and this structure is mirrored in logic. A natural logic becomes the direct result of cognition. Only in a second step is mathematics obtained by abstraction from natural logic. In this way mathematics gains a well-defined foundation and is no longer part of a dubious 'a priori knowledge' (Kant). This access to mathematics offers a new look at many old problems, e.g. the Petersburg problem and the problem 'P = NP?'. We demonstrate that this new justification of mathematics also has important applications in Artificial Intelligence. Our method provides a procedure to construct an adequate logic to solve most efficiently the problems of a given problem class. Thus, heuristics can be tailor-made for the necessities of applications.
Article
This article develops a critical investigation of the epistemological core of Hilbert's foundational project, the so-called finitary attitude. The investigation proceeds by distinguishing different senses of 'number' and 'finitude' that have been used in the philosophical arguments. The usual notion of modern pure mathematics, i.e. the sense of number which is implicit in the notion of an arbitrary finite sequence and iteration, is one sense of number and finitude. Another sense, of older origin, is connected with practices of counting concrete things, and a third sense is linked up with the immediate intuitive experience of multitudes of concrete things. Hilbert's finitism is examined with respect to these differences, and it will be shown that there is a tendency to conflate the different senses of number and finitude, a tendency which has been a source of problems in the discussion of the foundations of mathematics and in the philosophy of logic and language.
Article
This study evaluated the contribution of implementation intentions over the constructs of the theory of planned behaviour in testing a confirmatory model explaining the relationships between antecedents of academic achievement for university students. It was found that the effects of intention to perform and its classic antecedents on exam performance were mediated by implementation intentions, with a considerable increase in the amount of explained variance above the contribution of the constructs of planned behaviour theory. This paper is organized as follows. We first introduce the construct of intention, particularly in the context of the well-documented model of the theory of planned behaviour (TPB). The limitation of this model will be explained and a basic distinction it does not consider will be pointed out: forming an intention vs. implementing it. Then we present Heckhausen and Gollwitzer's Action Phases Model (APM) and the construct of implementation intention. The results of experimental and correlational research on the predictive power of this construct, especially in the educational domain, will be discussed. Thirdly, we propose an integrative model which fills the gap between TPB and performance by introducing implementation intentions from unwanted inner states. Fourthly, we give the results of an experimental study aimed at testing this model, conducted in an educational setting and using a confirmatory design (Structural Equation Modelling). In the end, we insist on the implications of these results for students.
Article
A new realist interpretation of quantum mechanics is introduced. Quantum systems are shown to have two kinds of properties: the usual ones described by values of quantum observables, which are called extrinsic, and those that can be attributed to individual quantum systems without violating standard quantum mechanics, which are called intrinsic. The intrinsic properties are classified into structural and conditional. A systematic and self-consistent account is given. Many more statements become meaningful than any version of the Copenhagen interpretation would allow. A new approach to classical properties and the measurement problem is suggested. A quantum definition of classical states is proposed.
Article
If Husserl is correct, phenomenological inquiry produces knowledge with an extremely high level of epistemic warrant or justification. However, there are several good reasons to think that we are highly fallible at carrying out phenomenological inquiries. It is extremely difficult to engage in phenomenological investigations, and there are very few substantive phenomenological claims that command a widespread consensus. In what follows, I introduce a distinction between method-fallibility and agent-fallibility, and use it to argue that the fact that we are fallible phenomenologists does not undermine Husserl’s claims concerning the epistemic value of phenomenological inquiry. I will also defend my account against both internalist and externalist objections.
Article
We analyze seemingly contradictory claims in the literature about the role played by decoherence in ensuring classical behavior for the chaotically tumbling satellite Hyperion. We show that the controversy is resolved once the very different assumptions underlying these claims are recognized. In doing so, we emphasize the distinct notions of the problem of classicality in the ensemble interpretation of quantum mechanics and in decoherence-based approaches that are aimed at addressing the measurement problem.
Chapter
This is an introduction to the theory of decoherence with an emphasis on its microscopic origins and on a dynamic description. The text corresponds to a chapter soon to be published in: A. Buchleitner, C. Viviescas, and M. Tiersch (Eds.), Entanglement and Decoherence. Foundations and Modern Trends, Lecture Notes in Physics, Vol 768, Springer, Berlin (2009)
Article
John McDowell rejects the idea that non-conceptual content can rationally justify empirical claims—a task for which it is ill-fitted by its non-conceptual nature. This paper considers three possible objections to his views: he cannot distinguish empty conception from the perceptual experience of an object; perceptual discrimination outstrips the capacity of concepts to keep pace; and experience of the empirical world is more extensive than the conceptual focusing within it. While endorsing McDowell’s rejection of what he means by non-conceptual content, and appreciating his insight into the experiential synthesis of intuition and conception (in particular, its role in grasping objects), I will argue that Edmund Husserl presents an even more comprehensive account of perceptual experience that explains how we experience the contribution of receptivity and sensibility and how they cooperate in perceptual discrimination. Further, it reveals “horizons”—a unique kind of contents, surplus content (rather than independent non-conceptual content)—beyond the synthesis of intuitive and conceptual contents through which objects are grasped. Such horizons play a constitutive role, making experience with its conceptual dimensions and justificatory potential possible; they in no way function like a bare given that is to fulfill some independent justificatory role. Whereas McDowell focuses on how experience does not take place in isolation from the exercise of conceptual capacities, Husserl complements his view by situating experience in a more encompassing whole and by elucidating the surplus-horizons that exceed the conceptual content of experience; play an inseparable, constitutive role within it; and indicate the limits of conceptual comprehension.
Article
Prolegomenon means something said in advance of something else. In this study, we posit that part of the work by Arthur Schopenhauer (1788–1860) can be thought of as a prolegomenon to the existing concept of fuzziness. His epistemic framework offers a comprehensive and surprisingly modern framework to study individual decision making and suggests a bridgeway from the Kantian program into the concept of fuzziness, which may have had its second prolegomenon in the work by Frege, Russell, Wittgenstein, Peirce and Black. In this context, Zadeh's seminal contribution can be regarded as the logical consequence of the Kant-Schopenhauer representation framework.
Article
In this paper I address some related aspects of Merleau-Ponty’s unfinished texts, The Visible and the Invisible and The Prose of the World. The point of departure for my reading of these works is the sense of philosophical disillusionment which underlies and motivates them, and which, I argue, leads Merleau-Ponty towards an engagement with art in general and with literature in particular. I suggest that Merleau-Ponty’s emerging conception of ethics—premised on the paradox of a “universal singularity” and concerned with the concrete experience of the individual subject, rather than with abstractions and formal categories—can best be articulated through the formalist concept of “defamiliarization,” the fundamental performativity of all literature, and the dialogic relations which, though inherent in all discourse, become most powerfully evident in the dynamics of reading.
Article
The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents, which can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.
Article
A general class of aggregation operators called MICA, having the properties of monotonicity, symmetry, and an identity element, is introduced. We stress the significance of the choice of identity in characterizing the operator. We show that the t-norm and t-conorm are special cases of these operators. Other classes of these operators are introduced, notably an additive class which is very much in the spirit of the kind of aggregation used in neural networks. It is shown how a general description of the fuzzy system modeling technique can be obtained using these operators.
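The role of the identity element is already visible in the two special cases the paper names: a t-norm aggregates with identity 1, a t-conorm with identity 0. A minimal Python sketch (product t-norm and probabilistic-sum t-conorm chosen here for illustration):

```python
from functools import reduce

def t_norm(values):
    # Product t-norm: symmetric, monotone, identity element 1.
    return reduce(lambda a, b: a * b, values, 1.0)

def t_conorm(values):
    # Probabilistic-sum t-conorm: symmetric, monotone, identity 0.
    return reduce(lambda a, b: a + b - a * b, values, 0.0)

# Appending the identity element never changes the aggregate:
assert t_norm([0.4, 0.7]) == t_norm([0.4, 0.7, 1.0])
assert t_conorm([0.4, 0.7]) == t_conorm([0.4, 0.7, 0.0])
```

Choosing the identity thus determines how "missing" or neutral inputs behave, which is why the paper treats that choice as characterizing the operator.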
Article
In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee's level of intelligence in order to obtain an intelligence score within a limited time.
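The "anytime" property can be made concrete as an interruptible loop that keeps a running intelligence estimate and refines it for as long as the test is allowed to run. The Python sketch below is schematic: `sample_environment` and `evaluate` are hypothetical routines standing in for the paper's complexity-weighted environment sampling and scoring.

```python
def anytime_intelligence_test(agent, sample_environment, evaluate,
                              time_budget):
    """Interruptible testing loop: each iteration samples one
    environment of adaptively chosen complexity and updates the
    running score, so stopping early still yields an estimate."""
    score, trials, complexity = 0.0, 0, 1
    for _ in range(time_budget):
        env = sample_environment(complexity)
        reward = evaluate(agent, env)
        score += reward
        trials += 1
        # Adapt difficulty to the examinee, as the paper proposes.
        complexity = max(1, complexity + (1 if reward > 0 else -1))
    return score / max(trials, 1)
```

The more iterations the budget allows, the more environments and complexity levels contribute to the average, which is exactly the sense in which a longer run yields a better assessment.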
Article
Constructing plans that can handle multiple problem instances is a longstanding open problem in AI. We present a framework for generalized planning that captures the notion of algorithm-like plans and unifies various approaches developed for addressing this problem. Using this framework, and building on the TVLA system for static analysis of programs, we develop a novel approach for computing generalizations of classical plans by identifying sequences of actions that will make measurable progress when placed in a loop. In a wide class of problems that we characterize formally in the paper, these methods allow us to find generalized plans with loops for solving problem instances of unbounded sizes and also to determine the correctness and applicability of the computed generalized plans. We demonstrate the scope and scalability of the proposed approach on a wide range of planning problems.
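What an "algorithm-like plan" buys is easiest to see on a toy instance: a single plan containing a loop solves problem instances of any size. The Python illustration below is hypothetical and blocks-world-flavored; it is not the TVLA-based machinery of the paper, only a picture of the kind of object being computed.

```python
def generalized_unstack_plan(tower):
    """A plan with a loop: 'while a block remains on the tower,
    unstack it and put it on the table'. The same plan text solves
    instances of unbounded size, which is the point of generalized
    planning with loops."""
    table = []
    while tower:                  # loop guard: measurable progress
        block = tower.pop()       # action: unstack the top block
        table.append(block)       # action: put it on the table
    return table

assert generalized_unstack_plan(list("ABCD")) == ["D", "C", "B", "A"]
```

The paper's contribution is to find such loops automatically, by identifying action sequences whose repetition provably makes measurable progress, and to certify when the resulting looped plan is correct and applicable.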
Article
A nonaxiomatic reasoning system is an adaptive system that works with insufficient knowledge and resources. At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a nonaxiomatic logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represent and process randomness, fuzziness, and ignorance. It can also uniformly carry out deduction, abduction, induction, and revision.
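To make the experience-grounded truth values concrete, here is a Python sketch of the NAL deduction rule, following the published truth functions for deduction (frequency f = f1·f2, confidence c = f1·f2·c1·c2); the other inference types (abduction, induction, revision) have analogous evidence-based truth functions not shown here.

```python
def deduction(premise1, premise2):
    """NAL deduction: from 'M -> P <f1, c1>' and 'S -> M <f2, c2>'
    derive 'S -> P'. Both frequency and confidence shrink along the
    inference chain, so long derivations yield suitably weak
    conclusions instead of spuriously certain ones."""
    f1, c1 = premise1
    f2, c2 = premise2
    return f1 * f2, f1 * f2 * c1 * c2

# 'Birds fly' and 'robins are birds', both well supported:
print(deduction((0.9, 0.9), (1.0, 0.9)))  # (0.9, 0.729)
```

This is the sense in which the logic uniformly handles randomness (via frequency), fuzziness, and ignorance (via confidence) inside one representation.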
Article
A model for the process of knowledge acquisition is presented that shows how naive realism emerges from a quantum mechanical background. We formalise this process of emergence and obtain in this way an illustrative insight into some of the most fundamental physical theories: GRW-theory and E∞-theory.
Article
Computational models fail to shed light on general metaphysical questions concerning the nature of emergence. At the same time, they may provide plausible explanations of particular cases of emergence. This paper outlines the kinds of modest explanations to which computational models are suited.
Article
Causality and belief change play an important role in many applications. This paper focuses on the main issues of causality and interventions in possibilistic graphical models. We show that interventions, which are very useful for representing causal relations between events, can be naturally viewed as a belief change process. In particular, interventions can be handled using a possibilistic counterpart of Jeffrey's rule of conditioning under uncertain inputs. This paper also addresses new issues that arise in the revision of graphical models when handling interventions. We first argue that the order in which observations and interventions are introduced is very important. Then we show that in order to correctly handle sequences of observations and interventions, one needs to change the structure of possibilistic networks. Lastly, an efficient procedure for revising possibilistic causal trees is provided.
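The structural change the paper refers to can be pictured generically: an intervention do(X = x) severs X's links to its parents and clamps its value, after which beliefs are recomputed. The Python sketch below shows only this Pearl-style graph surgery on a hypothetical parent-list representation; it does not implement the paper's possibilistic counterpart of Jeffrey's rule.

```python
def intervene(network, var, value):
    """Graph surgery for an intervention do(var = value): remove the
    edges from var's parents and fix var exogenously. `network` is a
    hypothetical dict mapping each variable to its parent list."""
    surgered = {v: list(parents) for v, parents in network.items()}
    surgered[var] = []          # cut the links from former parents
    clamped = {var: value}      # var is now set from outside
    return surgered, clamped

net = {"rain": [], "sprinkler": ["rain"], "wet": ["rain", "sprinkler"]}
print(intervene(net, "sprinkler", True))
```

Contrasting this with plain observation (which leaves the graph intact and conditions the distribution) makes it clear why the order of observations and interventions matters in a revision sequence.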
Article
Hubert L. Dreyfus used Heidegger as a guide for the whole AI (artificial intelligence) program at MIT. He introduced Heidegger's non-representational account of the absorption of Dasein (human being) in the world. He also explained that Heidegger distinguished two modes of being: the readiness-to-hand of equipment when we are involved in using it, and the presence-at-hand of objects when we contemplate them. In his 1925 course, Logic: The Question of Truth, Heidegger describes the most basic experience of what he later calls 'pressing into possibilities' not as dealing with the desk, the door, the lamp, the chair and so forth, but as directly responding to a 'what for'. According to Heidegger, every act of having things in front of oneself and perceiving them is held within the disclosure of those things, a disclosure that things get from a primary meaningfulness in terms of the what-for.
Article
This paper investigates decision-theoretic planning in sophisticated autonomous agents operating in environments of real-world complexity. An example might be a planetary rover exploring a largely unknown planet. It is argued that existing algorithms for decision-theoretic planning are based on a logically incorrect theory of rational decision making. Plans cannot be evaluated directly in terms of their expected values, because plans can be of different scopes, and they can interact with other previously adopted plans. Furthermore, in the real world, the search for optimal plans is completely intractable. An alternative theory of rational decision making is proposed, called "locally global planning".
Article
In the first section of this paper, the basic procedure for system identification is discussed in order to expose its problems. In the second section, the algorithmic method for belief-based identification is derived; in particular, it is shown that, in contrast to identification methods based on probability theory, all degrees of freedom can be fixed unambiguously by user requirements. In the third section, the method is applied to predicting the end of tool life in thread tapping.
Article
We present an application of the analytical inductive programming system Igor to learning sets of recursive rules from positive experience. We propose that this approach can be used within cognitive architectures to model regularity detection and generalization learning. Induced recursive rule sets represent knowledge which can produce systematic and productive behavior in complex situations, that is, control knowledge for chaining actions in different but structurally similar situations. We argue that an analytical approach which is governed by regularity detection in example experience is more plausible than generate-and-test approaches. After introducing analytical inductive programming with Igor, we give a variety of example applications from different problem solving domains. Furthermore, we demonstrate that the same generalization mechanism can be applied to rule acquisition for reasoning and natural language processing.
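The kind of recursive rule set such systems induce can be illustrated with a tiny example: from a few positive input/output pairs for a list function, the generalization is a recursive definition. The Python rendering below is hand-written for illustration; Igor itself works over term rewriting systems, and this is not its output syntax.

```python
# Positive examples the induction would start from:
examples = {(1,): 1, (1, 2): 2, (1, 2, 3): 3}

def last(xs):
    """Recursive rule set of the kind analytical inductive
    programming generalizes from the examples above:
        last([x])      = x
        last([x|rest]) = last(rest)"""
    if len(xs) == 1:
        return xs[0]
    return last(xs[1:])

assert all(last(k) == v for k, v in examples.items())
```

The analytical point is that the recursion is read off from regularities in the examples themselves, rather than found by enumerating and testing candidate programs.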
Chapter
Many philosophers these days consider themselves naturalists, but it's doubtful any two of them intend the same position by the term. This book describes and practices a particularly austere form of naturalism called 'Second Philosophy'. Without a definitive criterion for what counts as 'science' and what doesn't, Second Philosophy can't be specified directly - 'trust only the methods of science!' or some such thing - so the book proceeds instead by illustrating the behaviors of an idealized inquirer called here the 'Second Philosopher'. This Second Philosopher begins from perceptual common sense and progresses from there to systematic observation, active experimentation, theory formation, and testing, working all the while to assess, correct, and improve methods along the way. 'Second Philosophy' is then the result of the Second Philosopher's investigations. This book delineates the Second Philosopher's approach by tracing reactions to various familiar sceptical and transcendental views (Descartes, Kant, Carnap, late Putnam, van Fraassen), comparing methods to those of other self-described naturalists (especially Quine), and examining a prominent contemporary debate (between disquotationalists and correspondence theorists in the theory of truth) to extract a properly second-philosophical line of thought. The book then undertakes to practice Second Philosophy in its reflections on the ground of logical truth, the methodology, ontology, and epistemology of mathematics, and the general prospects for metaphysics naturalized.
Article
I present a case study of scientific discovery, where building two functional and behavioral approximations of neurons, one physical and the other computational, led to conceptual and implementation breakthroughs in a neural engineering laboratory. Such building of external systems that mimic target phenomena, and the use of these external systems to generate novel concepts and control structures, is a standard strategy in the new engineering sciences. I develop a model of the cognitive mechanism that connects such built external systems with internal models, and I examine how new discoveries, and consensus on discoveries, could arise from this external-internal coupling and the building process. The model is based on the emerging framework of common coding, which proposes a shared representation in the brain between the execution, perception, and imagination of movement.
Article
Ginzburg-Landau (GL) equations for the coexistent states of superconductivity and magnetism are derived microscopically from the extended Hubbard model with on-site repulsive and nearest-neighbor attractive interactions. In the derived GL free energy a cubic term that couples the spin-singlet and spin-triplet components of superconducting order parameters (SCOP) with magnetization exists. This term gives rise to a spin-triplet SCOP near the interface between a spin-singlet superconductor and a ferromagnet, consistent with previous theoretical studies based on the Bogoliubov de Gennes method and the quasiclassical Green's function theory. In coexistent states of singlet superconductivity and antiferromagnetism it leads to the occurrence of pi-triplet SCOPs.
Article
This paper examines contemporary attempts to explicate the explanatory role of mathematics in the physical sciences. Most such approaches involve developing so-called mapping accounts of the relationships between the physical world and mathematical structures. The paper argues that the use of idealizations in physical theorizing poses serious difficulties for such mapping accounts. A new approach to the applicability of mathematics is proposed.
Article
The life and accomplishments of Grete Hermann are described. During the early twentieth century, she worked in physics, mathematics, philosophy and education. Her most notable accomplishments in physics were in the interpretation of quantum theory.
Article
This special issue describes important recent developments in applying reinforcement learning models to capture neural and cognitive function. But reinforcement learning, as a theoretical framework, can apply at two very different levels of description: mechanistic and rational. Reinforcement learning is often viewed in mechanistic terms--as describing the operation of aspects of an agent's cognitive and neural machinery. Yet it can also be viewed as a rational level of description, specifically, as describing a class of methods for learning from experience, using minimal background knowledge. This paper considers how rational and mechanistic perspectives differ, and what types of evidence distinguish between them. Reinforcement learning research in the cognitive and brain sciences is often implicitly committed to the mechanistic interpretation. Here the opposite view is put forward: that accounts of reinforcement learning should apply at the rational level, unless there is strong evidence for a mechanistic interpretation. Implications of this viewpoint for reinforcement-based theories in the cognitive and brain sciences are discussed.
Article
The conceptual and dynamical aspects of decoherence are analyzed, while their consequences are discussed for several fundamental applications. This mechanism, which is based on a universal Schrödinger equation, is furthermore compared with the phenomenological description of open systems in terms of 'quantum dynamical maps'.