Article

Artificial Intelligence: Connectionist and Symbolic Approaches


Abstract

In this article, the two competing paradigms of artificial intelligence, connectionist and symbolic approaches, will be described. It is pointed out that no single existing paradigm can fully handle all the major AI problems. Each paradigm has its strengths and weaknesses. This situation indicates the need to integrate these two existing paradigms. Perhaps the most significant feature of current artificial intelligence research is the co-existence of a number of vastly different and often seriously conflicting paradigms, competing for the attention of the research community (as well as research funding). In this article, two competing paradigms of artificial intelligence, the connectionist and the symbolic approach, will be described. Brief analysis and criticism of each paradigm will be provided, and possible integration of the two will also be discussed as a result of the analysis of their respective shortcomings. 1 The Two Paradigms: The two main competing paradigms in artificial ...


... Convolutional filters, recurrent hidden states, and nonlinearities, composed into operations parameterized by millions of weights, are well-suited for transforming continuous, high-dimensional, and distributed representations of discrete entities. In the literature, this distributed approach to knowledge is known as connectionist AI [1]. Its primary advantage is that, because the output of the model is a differentiable function of its parameters, gradient-based optimization is possible. ...
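To make the excerpt's point concrete, here is a minimal sketch (not from the cited work; the linear model, data, and learning rate are illustrative) of gradient-based optimization: because the toy model's output is a differentiable function of its parameters w and b, analytic gradients of the loss drive the fit.

```python
# Minimal sketch: a model output that is differentiable in its parameters
# can be fit by gradient descent. Toy linear model, squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)  # synthetic targets

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    err = (w * x + b) - y
    grad_w = 2.0 * np.mean(err * x)  # d(mean squared error)/dw
    grad_b = 2.0 * np.mean(err)      # d(mean squared error)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges near 3.0 and 1.0
```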
... The original approach completely avoided continuous representations because, during the founding of the discipline in the 1960s, the computational world was far more constrained. Instead, early program synthesis researchers focused on symbolic AI [1]. The fundamental building block for this paradigm was the notion of a symbol; symbols, when combined, form expressions. ...
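For contrast, a minimal sketch (all names hypothetical, not from the cited work) of the symbolic building blocks this excerpt describes: atomic symbols combined into expressions and manipulated by explicit rewrite rules rather than by numeric optimization.

```python
# Minimal sketch of symbols combined into expressions, plus one explicit
# rewrite rule operating on their structure. Purely illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sym:
    name: str

@dataclass(frozen=True)
class Expr:
    op: Sym
    args: tuple

def simplify(e):
    """Hand-written rule: (+ x 0) -> x."""
    if isinstance(e, Expr) and e.op == Sym("+") and e.args[1] == Sym("0"):
        return e.args[0]
    return e

print(simplify(Expr(Sym("+"), (Sym("x"), Sym("0")))))  # Sym(name='x')
```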
Article
In recent years, deep learning has made tremendous progress in a number of fields that were previously out of reach for artificial intelligence. The successes in these problems have led researchers to consider the possibilities for intelligent systems to tackle a problem that humans themselves have only recently considered: program synthesis. This challenge is unlike others such as object recognition and speech translation, since its abstract nature and demand for rigor make it difficult even for human minds to attempt. While it is still far from being solved or even competitive with most existing methods, neural program synthesis is a rapidly growing discipline which holds great promise if completely realized. In this paper, we start with exploring the problem statement and challenges of program synthesis. Then, we examine the fascinating evolution of program induction models, along with how they have succeeded, failed and been reimagined since. Finally, we conclude with a contrastive look at program synthesis and future research recommendations for the field.
... Information processing is at the core of most complex real-world applications. As articulated by Sun in [1], no single problem-solving methodology is able to cope with all aspects of such applications. Thus, hybrid systems, which aim to combine the strengths of individual techniques, have been receiving increased attention. ...
... In the majority of cases, computers are programmed to process information in either a dominantly bottom-up or a dominantly top-down fashion. Despite their power, top-down approaches have pitfalls when applied to real-world scenarios, especially in terms of their sensitivity to noise, limited learning capability [1], and symbol grounding [2]. On the other hand, bottom-up information processing tends to be limited in the complexity of operations it can perform. ...
... An important idea linked to this approach is the search space [5]. It is supposed that in any problem there is a space of states, defined by an initial state, a set of actions that can be taken, and a transition model that defines the consequences of the actions. ...
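As a sketch of that idea (the +1/*2 toy problem is an assumption for illustration, not from the cited text), the following breadth-first search makes the three ingredients explicit: an initial state, an action set, and a transition model.

```python
# Minimal search-space sketch: initial state, actions, transition model,
# and a breadth-first search over the resulting space of states.
from collections import deque

def actions(state):
    return ["inc", "dbl"]  # the two actions available in every state

def transition(state, action):
    return state + 1 if action == "inc" else state * 2

def bfs(start, goal):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for a in actions(state):
            nxt = transition(state, a)
            if nxt not in seen and nxt <= goal:  # prune overshoots
                seen.add(nxt)
                frontier.append((nxt, path + [a]))

print(bfs(1, 10))  # ['inc', 'dbl', 'inc', 'dbl']
```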
... The poor results obtained with symbol-manipulation models, especially their inability to handle flexible and robust processing in an efficient manner, led in the 1980s to the connectionist paradigm [5]. It does not deny that at some level human beings manipulate symbols, but it suggests that this manipulation is not the whole of cognition; it tries to model the source of all these unconscious skills and instincts as an interconnected network of simple and almost uniform units [2]. ...
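A minimal sketch of that "interconnected network of simple and almost uniform units": every unit applies the same weighted-sum-plus-nonlinearity operation, and whatever the network knows is carried by the connection weights rather than by explicit symbols. Sizes and weights here are arbitrary.

```python
# Minimal connectionist sketch: identical simple units, knowledge in weights.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # input -> hidden connections
W2 = rng.normal(size=(2, 4))  # hidden -> output connections

def layer(W, x):
    return np.tanh(W @ x)  # the same operation at every unit

x = np.array([0.5, -1.0, 0.25])
print(layer(W2, layer(W1, x)))
```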
Thesis
In the last few years, thanks to new Deep Learning techniques, artificial neural networks have completely revolutionized the technological landscape, proving effective in many tasks of Artificial Intelligence and related research fields. It is therefore interesting to analyse how, and to what degree, deep networks can replace symbolic AI systems. After the impressive results obtained in the game of Go, the game of Nine Men's Morris was chosen as the case study for this work, because it is a widely played and deeply studied board game. The Neural Nine Men's Morris system was therefore created, a completely sub-symbolic program which uses three deep networks to choose the best move for the game. The networks were trained on a dataset of more than 1,500,000 pairs (game state, best move), created according to the choices of a symbolic AI system. Tests demonstrated that the system has learnt the rules of the game, predicting a legal move in more than 99% of cases. Moreover, it reached an accuracy of 39% on the dataset and developed its own game strategy, which differs from that of its trainer, proving itself a better or worse player depending on its adversary. The results achieved in this case study show that the key issue in designing state-of-the-art AI systems in this context seems to be a good balance between symbolic and sub-symbolic techniques, giving more weight to the latter, with the aim of reaching a perfect integration of these technologies.
... One of the most significant features of today's Artificial Intelligence (AI) is the existence of two competing paradigms, the symbolic approach and the connectionist approach [1]. In this work, we explore Artificial Neural Networks (ANNs) through the prism of the connectionist approach. ...
Preprint
Full-text available
Motivated by graph theory, artificial neural networks (ANNs) are traditionally structured as layers of neurons (nodes), which learn useful information by the passage of data through interconnections (edges). In the machine learning realm, graph structures (i.e., neurons and connections) of ANNs have recently been explored using various graph-theoretic measures linked to their predictive performance. On the other hand, in network science (NetSci), certain graph measures including entropy and curvature are known to provide insight into the robustness and fragility of real-world networks. In this work, we use these graph measures to explore the robustness of various ANNs to adversarial attacks. To this end, we (1) explore the design space of inter-layer and intra-layer connectivity regimes of ANNs in the graph domain and record their predictive performance after training under different types of adversarial attacks, (2) use graph representations for both inter-layer and intra-layer connectivity regimes to calculate various graph-theoretic measures, including curvature and entropy, and (3) analyze the relationship between these graph measures and the adversarial performance of ANNs. We show that curvature and entropy, while operating in the graph domain, can quantify the robustness of ANNs without having to train these ANNs. Our results suggest that real-world networks, including brain networks, financial networks, and social networks, may provide important clues for the neural architecture search for robust ANNs. We propose a search strategy that efficiently finds robust ANNs among a set of well-performing ANNs without needing to train all of them.
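Of the graph measures named in the abstract, entropy is the easiest to illustrate; the sketch below (a random graph standing in for an ANN's connectivity, which is an assumption) computes a simple degree-distribution entropy with networkx. The paper's curvature measures are substantially more involved.

```python
# Minimal sketch: degree-distribution entropy of a graph standing in for
# an ANN's connectivity structure. Illustrative only.
import math
import networkx as nx

G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
degrees = [d for _, d in G.degree()]
total = sum(degrees)
probs = [d / total for d in degrees if d > 0]
entropy = -sum(p * math.log(p) for p in probs)
print(f"degree-distribution entropy: {entropy:.3f}")
```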
... [4]. There are several works that attempt to explain the different types of Hybrid Systems [5], [6]. There exists a set of criteria that allows the NSHS to be classified. ...
Article
Full-text available
In the industrial sector there are many processes where visual inspection is essential, and the automation of those processes becomes a necessity to guarantee the quality of several kinds of objects. In this paper we propose a methodology for textile quality inspection based on the texture cue of an image. To solve this, we use a Neuro-Symbolic Hybrid System (NSHS) that allows us to combine an artificial neural network and the symbolic representation of expert knowledge. The artificial neural network uses the CasCor learning algorithm, and we use production rules to represent the symbolic knowledge. The features used for inspection have the advantage of being tolerant to rotation and scale changes. We compare the results with those obtained from an automatic computer vision task, and we conclude that the results obtained using the proposed methodology are better.
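A minimal sketch of the neuro-symbolic division of labour described above, with a stand-in scorer in place of the trained CasCor network and invented production rules; every threshold and feature name is hypothetical.

```python
# Minimal NSHS sketch: a (stand-in) neural scorer proposes a defect
# likelihood, and symbolic production rules make the final decision.
def neural_scorer(features):
    # Placeholder for a trained network such as CasCor.
    return min(1.0, 0.9 * features["irregularity"] + 0.2 * features["contrast"])

def expert_rules(score, features):
    # Production rules encoding (hypothetical) inspector knowledge.
    if score > 0.8:
        return "reject"
    if score > 0.5 and features["contrast"] > 0.7:
        return "manual review"
    return "accept"

f = {"irregularity": 0.6, "contrast": 0.9}
print(expert_rules(neural_scorer(f), f))  # manual review
```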
... [3]. There are several works that attempt to explain the different types of Hybrid Systems [4], [5]. There exists a set of criteria that allows the NSHS to be classified. ...
Article
Full-text available
Neuro-Symbolic Hybrid Systems (NSHS) are used to solve problems where there is a need to combine and integrate artificial neural networks and symbolic representations in a single system in order to obtain better results. We developed an NSHS methodology to integrate the knowledge of a human expert and the numeric knowledge obtained from a computer vision process. We applied the methodology to solve a quality inspection problem in artificial textures. The construction of neuro-symbolic integration strategies allows us to define an adequate type of neuro-symbolic system and to increase the efficiency of the inspection task, as shown by the better results obtained compared with other approaches.
... modeling knowledge representations, while the latter is more focused on capturing the learning process. This disparity has led to the development of hybrid connectionist-symbolic models [46]. Examples of well-known cognitive architectures that fall into this category include ACT-R [47], Soar [48], EPIC [49], and CLARION [50]. ...
Article
This article focuses on the design of systems in which a human operator is responsible for overseeing autonomous agents and providing feedback based on sensor data. In the control systems community, the term human supervisory control (or simply supervisory control) is often used as a shorthand reference for systems with this type of architecture [5]-[7]. In a typical human supervisory control application, the operator does not directly manipulate autonomous agents but rather indirectly interacts with these components via a central data-processing station. As such, system designers have the opportunity to easily incorporate automated functionalities to control how information is presented to the operator and how the input provided by the operator is used by automated systems. The goal of these functionalities is to take advantage of the inherent robustness and adaptability of human operators, while mitigating adverse effects such as unpredictability and performance variability. In some contexts, to meet the goal of single-operator supervision of multiple automated sensor systems, such facilitating mechanisms are not only useful but necessary for practical use [8], [9]. A successful system design must carefully consider the goals of each part of the system as a whole and seamlessly stitch components together using facilitating functionalities.
... This is problematic because engineering design concerns itself with understanding how design choices affect the performance of the system. This approach has worked well for perceptual tasks such as classification and recognition of patterns; however, this is a small piece of cognition, and there is still difficulty in learning the complex representations necessary for high-level cognitive functions [18]. Connectionist models have the potential to match and even enhance the capabilities of pure symbol systems, but these developments are likely still far away. ...
Article
Full-text available
Cognitive engineering is a multi-disciplinary field and hence it is difficult to find a review article consolidating the leading developments in the field. The incredible pace at which technology is advancing pushes the boundaries of what is achievable in cognitive engineering. There are also differing approaches to cognitive engineering brought about from the multi-disciplinary nature of the field and the vastness of possible applications. Thus research communities require more frequent reviews to keep up to date with the latest trends. In this paper we shall discuss some of the approaches to cognitive engineering holistically to clarify the reasoning behind the different approaches and to highlight their strengths and weaknesses. We shall then show how developments from seemingly disjointed views could be integrated to achieve the same goal of creating cognitive machines. By reviewing the major contributions in the different fields and showing the potential for a combined approach, this work intends to assist the research community in devising more unified methods and techniques for developing cognitive machines.
... According to Kant (2018), given the connectionist nature of neural network approaches (Sun, 2000), neural-based approaches are not suitable to generate code properly when operating at character level. Using a variation of a recurrent neural network, Neelakantan et al. (2015) proposed a model to interpret questions over spreadsheets that require the application of built-in functions. ...
Thesis
Programming is a key skill in a world where businesses are driven by digital transformations. Although much of the programming demand can be addressed by a simple set of instructions composing libraries and services available on the web, non-technical professionals, such as domain experts and analysts, are still unable to construct their own programs due to the intrinsic complexity of coding. Among other types of end-user development, natural language programming has emerged to allow users to program without the formalism of traditional programming languages, where a tailored semantic parser can translate a natural language utterance into a formal command representation able to be processed by a computational machine. Currently, semantic parsers are typically built on top of a learning method that defines its behaviour based on the patterns behind a large training dataset, whose production is frequently costly and time-consuming. Our research is devoted to studying and proposing a semantic parser for natural language commands targeting a scenario with low availability of training data. Our proposed semantic parser follows a multi-component architecture, composed of a specialised shallow parser that associates natural language commands with predicate-argument structures, integrated with a distributional ranking model that matches the command to a function signature available from an API knowledge base. Systems developed with statistical learning models and complex linguistic resources, like the proposed semantic parser, do not natively provide an easy way to associate a single feature of the input data with its impact on system behaviour. In this scenario, end-user explanations for intelligent systems have become a strong requirement to increase user confidence and system literacy. Thus, our research designed an explanation model for the proposed semantic parser that fits the heterogeneity of its multi-component architecture. The explanation model explores a hierarchical representation with an increasing degree of technical depth, providing higher-level explanations in the initial layers and moving gradually to those that demand technical knowledge, applying different explanation strategies to better express the approach behind each component. With the support of a user-centred experiment, we compared the utility of different types of explanations and the impact of background knowledge on user preferences.
... Notably, two main families of approaches emerged: the symbolic and the connectionist or sub-symbolic ones [151,152]. While the former focuses on representing the world through symbols, which in turn represent concepts, thus emulating how the human mind reasons and infers, the latter aims at mimicking human intuition by emulating how the human brain works at a very low level. ...
Article
Full-text available
Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering the areas that already exploit logic-based approaches as well as those that are more likely to adopt logic-based approaches in the future.
... Under this perspective, several methods have been developed for building knowledge-based systems capable of making complex automated decisions. The main ones, according to Sun (1999) and Darlington (2013), fall into two categories: ...
Thesis
The concept of the "Business Rule Management System" (BRMS) was introduced to facilitate the design, management, and execution of company-specific business policies. Based on a symbolic approach, the main idea behind these tools is to enable business users to manage business rule changes in the system without requiring programming skills. It is therefore a matter of providing them with tools that enable them to formulate their business policies in a near-natural-language form and to automate their processing. Nowadays, with the expansion of intelligent systems, we have to cope with increasingly complex decision logic and large volumes of data, and it is not straightforward to identify the causes leading to a decision. There is a growing need to justify and optimize automated decisions in a short time frame, which motivates the integration of advanced explanatory components into such systems. Thus, the main challenge of this research is to provide an industrializable approach for explaining the decision-making processes of business rules applications and, more broadly, rule-based systems. This approach should be able to provide the information necessary for a general understanding of the decision, to serve as a justification for internal and external entities, and to enable the improvement of existing rule engines. To this end, the focus will be on the generation of the explanations themselves as well as on the manner and the form in which they will be delivered.
... Instead of expressing intelligence through a formal set of symbols and expressions or developing a library of knowledge, sub-symbolic AI is inspired by the biological properties of neurons. Those that favor this approach contend that the evolutionary process of producing ever more complex organic brain structures is a template to be followed by AI. Early work in this field established the idea of creating an artificial network of neurons (initially named perceptrons) that do not encode information into any formal language. Instead, data is processed through three kinds of neurons or units (input, hidden, and output), where each connection is distinguished by having a positive (exciting) or a negative (inhibiting) weight. ...
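The unit-and-weight picture in this excerpt can be sketched in a few lines; the particular weights below are arbitrary, chosen only to show excitatory (positive) and inhibitory (negative) connections among input, hidden, and output units.

```python
# Minimal sketch of input/hidden/output units with signed weights:
# positive entries excite, negative entries inhibit.
import numpy as np

x = np.array([1.0, 0.0, 1.0])           # input units
W_ih = np.array([[0.8, -0.5, 0.3],      # input -> hidden weights
                 [-0.2, 0.9, 0.4]])
W_ho = np.array([[0.6, -0.7]])          # hidden -> output weights

step = lambda v: (v > 0).astype(float)  # threshold activation
hidden = step(W_ih @ x)                 # -> [1., 1.]
output = step(W_ho @ hidden)            # 0.6 - 0.7 < 0 -> [0.]
print(hidden, output)
```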
... symbolic-subsymbolic systems). Numerous researchers have argued for the advantages of symbolic-subsymbolic systems (Kelly, 2003; Simen & Polk, 2010; Sun, 2001; A. Wilson & Hendler, 1993). The basic motivation for combining both approaches is straightforward. ...
Article
This dissertation explores the implications of computational cognitive modeling for information retrieval. The parallel between information retrieval and human memory is that the goal of an information retrieval system is to find the set of documents most relevant to the query, whereas the goal of the human memory system is to assess the relevance of items stored in memory given a memory probe (Steyvers & Griffiths, 2010). The two major topics of this dissertation are desirability and information scent. Desirability is the context-independent probability of an item receiving attention (Recker & Pitkow, 1996). Desirability has been widely utilized in numerous experiments to model the probability that a given memory item would be retrieved (Anderson, 2007). Information scent is a context-dependent measure defined as the utility of an information item (Pirolli & Card, 1996b). Information scent has been widely utilized to predict the memory item that would be retrieved given a probe (Anderson, 2007) and to predict the browsing behavior of humans (Pirolli & Card, 1996b). In this dissertation, I proposed the theory that the desirability observed in human memory is caused by preferential attachment in networks. Additionally, I showed that documents accessed in large repositories mirror the statistical properties observed in human memory and that these properties can be used to improve document ranking. Finally, I showed that the combination of information scent and desirability improves document ranking over existing well-established approaches.
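A minimal sketch of how the two quantities could be fused for ranking; the log-linear combination and the weight alpha are assumptions for illustration, not the dissertation's actual model.

```python
# Minimal sketch: rank documents by a weighted combination of desirability
# (context-independent prior) and information scent (context-dependent
# utility). The combination rule is illustrative.
import math

def rank(docs, scent, desirability, alpha=0.5):
    def score(d):
        return (alpha * math.log(desirability[d])
                + (1 - alpha) * math.log(scent[d]))
    return sorted(docs, key=score, reverse=True)

docs = ["a", "b", "c"]
desirability = {"a": 0.5, "b": 0.3, "c": 0.2}  # e.g. from access frequency
scent = {"a": 0.1, "b": 0.6, "c": 0.3}         # e.g. from query similarity
print(rank(docs, scent, desirability))         # ['b', 'c', 'a']
```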
Conference Paper
Cognitive engineering is a multi-disciplinary field and hence it is difficult to find a review article consolidating the leading developments in the field. The incredible pace at which technology is advancing pushes the boundaries of what is achievable in cognitive engineering. There are also differing approaches to cognitive engineering brought about from the multi-disciplinary nature of the field and the vastness of possible applications. Thus research communities require more frequent reviews to keep up to date with the latest trends. In this paper we shall discuss some of the approaches to cognitive engineering holistically to clarify the reasoning behind the different approaches and to highlight their strengths and weaknesses. We shall then show how developments from seemingly disjointed views could be integrated to achieve the same goal of creating cognitive machines. By reviewing the major contributions in the different fields and showing the potential for a combined approach, this work intends to assist the research community in devising more unified methods and techniques for developing cognitive machines.
Article
Human awareness under different circumstances is complex and non-trivial to understand. Nevertheless, due to the importance of awareness for safety and efficiency in many domains (e.g., the aviation domain), it is necessary to study the processes behind situation awareness, to eliminate possible errors in action selection that may lead to disasters. Interesting models for situation awareness have been presented, mainly from an ecological psychology perspective, but they are debatable with respect to the latest neurocognitive evidence. With the developments in brain imaging and recording techniques, more and more detailed information on complex cognitive processes becomes available. This provides room to further investigate the mechanisms behind many cognitive phenomena, including situation awareness. This paper presents a computational cognitive agent model for situation awareness from the perspective of action selection, which is inspired by neurocognitive evidence. The model integrates bottom-up and top-down cognitive processes, related to various cognitive states: perception, desires, attention, intention, (prior and retrospective) awareness, ownership, feeling, and communication. Based on the model, various cognitive effects can be explained, such as perceptual load, predictive processes, inferential processes, cognitive controlling, unconscious bias, and conscious bias. A model like this will be useful in domains that benefit from complex simulations of socio-technical systems (e.g., the aviation domain) based on computational models of human behaviour. In such domains, existing agent-based simulations are limited, since most of the agent models do not include realistic nature-inspired processes. The validity of the model is illustrated based on simulations for the aviation domain, focusing on a particular situation where an agent has biased perception, poor comprehension, habit-driven projection, and a conflict between prior and retrospective effects on action execution.
Article
Full-text available
The necessity of creating intelligent processing entities has become an important field of study for some researchers. Most of the time, it is difficult to decide what type of knowledge representation to use in order to find the best way to solve a problem. In the literature, there is always a competition to demonstrate that one type of knowledge representation is better than another. Over the years, researchers have observed the weaknesses of each representation and the naturally complementary properties of each knowledge type. Therefore, scientists have begun to design new systems by integrating different types of knowledge into a single system, called "hybrid." Hybrid systems are complex systems built from a collection of two or more simple systems.
Article
Full-text available
One problem pertaining to Intensive Care Unit information systems is that, in some cases, a very dense display of data can result. To ensure the overview and readability of the increasing volumes of data, some special features are required (e.g., data prioritization, clustering, and selection mechanisms), along with the application of analytical methods (e.g., temporal data abstraction, principal component analysis, and detection of events). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods, and we discuss its potential benefit to the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed on the graphical user interface by the user are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These sequences of expert knowledge acquisition can make it much easier for knowledge to emerge during a similar experience and can positively impact the monitoring of critical situations. The provided graphical user interface, incorporating a user-centered visual analysis, is exploited to facilitate the natural and effective representation of clinical information for patient care.
Chapter
Full-text available
The research described in this chapter is concerned with investigating the combination of knowledge discovery in databases and intelligent computing technologies in developing a framework for intelligent decision support systems (IDSS). In this context, the chapter presents an approach to IDSS through the combination of data mining (DM) technology with artificial neural networks (NN) in a hybrid architecture called the DM-NN model. This research draws on the concepts of computational intelligence, knowledge discovery in databases, and decision support.
Article
Full-text available
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.
Article
Full-text available
Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency --- as though these inferences are a reflex response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remarkable human ability seems paradoxical given the results about the complexity of reasoning reported by researchers in artificial intelligence. It also poses a challenge for cognitive science and computational neuroscience: How can a system of simple and slow neuron-like elements represent a large body of systematic knowledge and perform a range of inferences with such speed? We describe a computational model that is a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox. We show how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables, and perform a class of inferences in a few hundred ...
Article
Connectionist models have had problems representing and applying general knowledge rules that specifically require variables. This variable binding problem has barred them from performing the high-level inferencing necessary for planning, reasoning, and natural language understanding. This paper describes ROBIN, a structured neural network model capable of high-level inferencing requiring variable bindings and rule application. Variable bindings are handled by signatures—activation patterns which uniquely identify the concept bound to a role. Signatures allow multiple role-bindings to be propagated across the network in parallel for rule application and dynamic inference path instantiation. Signatures are integrated within a connectionist semantic network structure whose constraint-relaxation process selects between those newly-instantiated inferences. This allows ROBIN to handle an area of high-level inferencing difficult even for symbolic models, that of resolving multiple constraints from context to select the best interpretation from among several alternative and possibly ambiguous inference paths.
Chapter
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.
Article
An approach to connectionist natural language processing is proposed, which is based on hierarchically organized modular parallel distributed processing (PDP) networks and a central lexicon of distributed input/output representations. The modules communicate using these representations, which are global and publicly available in the system. The representations are developed automatically by all networks while they are learning their processing tasks. The resulting representations reflect the regularities in the subtasks, which facilitates robust processing in the face of noise and damage, supports improved generalization, and provides expectations about possible contexts. The lexicon can be extended by cloning new instances of the items, that is, by generating a number of items with known processing properties and distinct identities. This technique combinatorially increases the processing power of the system. The recurrent FGREP module, together with a central lexicon, is used as a basic building block in modeling higher level natural language tasks. A single module is used to form case-role representations of sentences from word-by-word sequential natural language input. A hierarchical organization of four recurrent FGREP modules (the DISPAR system) is trained to produce fully expanded paraphrases of script-based stories, where unmentioned events and role fillers are inferred.
Article
The presented theory views inductive learning as a heuristic search through a space of symbolic descriptions, generated by an application of various inference rules to the initial observational statements. The inference rules include generalization rules, which perform generalizing transformations on descriptions, and conventional truth-preserving deductive rules. The application of the inference rules to descriptions is constrained by problem background knowledge, and guided by criteria evaluating the “quality” of generated inductive assertions. Based on this theory, a general methodology for learning structural descriptions from examples, called Star, is described and illustrated by a problem from the area of conceptual data analysis.
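One of the generalization rules in this style of inductive learning is easy to sketch: the "dropping condition" rule, which generalizes a conjunctive description by deleting one of its conditions. The attribute-value strings below are invented for illustration.

```python
# Minimal sketch of the dropping-condition generalization rule: each
# one-step generalization removes a single condition from the conjunction.
def drop_condition(description):
    for cond in description:
        yield frozenset(description - {cond})

hypothesis = frozenset({"color=red", "shape=square", "size=large"})
for g in drop_condition(hypothesis):
    print(sorted(g))
```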
Article
Computer science is the study of the phenomena surrounding computers. The founders of this society understood this very well when they called themselves the Association for Computing Machinery. The machine—not just the hardware, but the programmed, living machine—is the organism we study.
Article
This paper presents a novel learning model, CLARION, which is a hybrid model based on the two-level approach proposed by Sun. The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations, respectively), tapping into the synergy of the two types of processes. It was applied to sequential decision tasks. Experiments and analyses are reported that shed light on the advantages of the model.
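A minimal sketch of the bottom-up direction described in the abstract: once reinforcement learning at the lower level assigns high value to a state-action pair, an explicit symbolic rule is extracted. The tabular Q stand-in and the threshold are illustrative, not CLARION's actual mechanics.

```python
# Minimal bottom-up learning sketch: promote well-valued state-action
# pairs from a (stand-in) learned value table to explicit IF-THEN rules.
Q = {("cold", "heat_on"): 0.9, ("cold", "heat_off"): 0.1,
     ("warm", "heat_on"): 0.2, ("warm", "heat_off"): 0.8}

def extract_rules(Q, threshold=0.7):
    return [f"IF state={s} THEN {a}" for (s, a), v in Q.items() if v >= threshold]

for rule in extract_rules(Q):
    print(rule)
# IF state=cold THEN heat_on
# IF state=warm THEN heat_off
```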
Article
The author presents a condensed exposition of some basic ideas underlying fuzzy logic and describes some representative applications. He covers basic principles; meaning representation and inference; basic rules of inference; and the linguistic variable and its application to fuzzy control.
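As a pointer to what "linguistic variable" and "fuzzy control" mean in practice, here is a minimal sketch with invented membership functions for a temperature variable and a two-rule controller; none of the numbers come from the article.

```python
# Minimal fuzzy-control sketch: membership functions for the linguistic
# variable "temperature", and a two-rule weighted-average controller.
def mu_cold(t):
    return max(0.0, min(1.0, (18.0 - t) / 10.0))

def mu_hot(t):
    return max(0.0, min(1.0, (t - 22.0) / 10.0))

def heater_power(t):
    """IF cold THEN full heat; IF hot THEN no heat (weighted average)."""
    w_cold, w_hot = mu_cold(t), mu_hot(t)
    if w_cold + w_hot == 0:
        return 0.5  # comfortable band: moderate default
    return (w_cold * 1.0 + w_hot * 0.0) / (w_cold + w_hot)

print(heater_power(10.0), heater_power(30.0))  # 1.0 0.0
```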