Chapter

Criticism of the «Chinese Room» by J. Searle from the Position of a Hybrid Model for the Design of Artificial Cognitive Agents


Abstract

The article reviews the phenomenon of understanding the meaning of natural language and, more broadly, the meaning of the situation in which a cognitive agent is located, taking context into account. A specific definition of understanding is given that lies at the intersection of neurophysiology, information theory and cybernetics. A scheme of an abstract architecture for a cognitive agent (of arbitrary nature) is presented, and it is argued that an agent with this architecture can understand in the sense described in the work. The article also offers a critique of J. Searle's thought experiment «The Chinese Room» from the standpoint of constructing artificial cognitive agents within a hybrid paradigm of artificial intelligence. The novelty of the work lies in applying the author's methodological approach to the construction of artificial cognitive agents: within this approach, not only the perception of external stimuli from the environment is considered, but also the philosophical problem of an artificial cognitive agent's «understanding» of its sensory inputs. The relevance of the work follows from the renewed interest of the scientific community in Strong Artificial Intelligence (AGI). The author's contribution consists in a comprehensive, multi-perspective consideration of how artificial cognitive agents understand what they perceive, forming prerequisites for new models and a theory of understanding within artificial intelligence, which in the future may help to build a holistic theory of the nature of the human mind. The article will be of interest to specialists working on artificial intelligent systems and the construction of cognitive agents, as well as to scientists from other fields, above all philosophy, neurophysiology and psychology.
Keywords: Philosophy of mind; Philosophy of artificial intelligence; Chinese room; Semantics; Perception; Understanding; Learning; Machine learning; Artificial intelligence; Strong artificial intelligence


Article
Full-text available
The article describes the author's proposal for a cognitive architecture for the development of a general-level artificial intelligent agent («strong» artificial intelligence). New principles for developing such an architecture are proposed: a hybrid approach in artificial intelligence and bionics. An architecture diagram of the proposed solution is given, and possible areas of application are described. Strong artificial intelligence is a technical solution that can solve arbitrary cognitive tasks available to humans (human-level artificial intelligence) and even surpass the capabilities of human intelligence (artificial superintelligence). The fields of application of strong artificial intelligence are limitless: from current problems facing humanity to completely new problems that are not yet accessible to human civilization or are still waiting for their discoverer. The novelty of the work lies in the author's approach to the construction of a cognitive architecture that has absorbed the results of many years of research in artificial intelligence and of the analysis of other researchers' cognitive architectures.
Article
Full-text available
This paper presents a model of hierarchical associative memory that can be used as a basis for building general-purpose artificial cognitive agents. With the help of this model, one of the most important problems of modern machine learning and artificial intelligence in general can be addressed: enabling a cognitive agent to use «life experience» to process the context of the situation in which it has been, is, and possibly will be. The model is applicable to artificial cognitive agents functioning both in specially designed virtual worlds and in objective reality. Using hierarchical associative memory as the long-term memory of artificial cognitive agents will allow them to navigate effectively both the general knowledge accumulated by mankind and their own life experience. The novelty of the presented work is based on the author's interdisciplinary approach to the construction of context-dependent artificial cognitive agents, drawing in particular on the achievements of artificial intelligence, cognitive science, neurophysiology, psychology and sociology. The relevance of this work rests on the keen interest of the scientific community and the high social demand for general-level artificial intelligence systems. Associative hierarchical memory, based on an approach similar to the hypercolumns of the human cerebral cortex, is becoming one of the important components of a general-level artificial intelligent agent. The article will be of interest to all researchers working on building artificial cognitive agents and in related fields.
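The hierarchical associative memory is described here only abstractly. Purely as a toy illustration of the general idea (an associative store per level of abstraction, with recall filtered by context), one might sketch something like the following; the class and method names are hypothetical and not taken from the cited paper:

```python
# Illustrative toy only: one associative store per abstraction level,
# with context-filtered recall.  Not the model from the cited paper.
from collections import defaultdict

class HierarchicalAssociativeMemory:
    def __init__(self, levels=3):
        # one associative store (cue -> set of associations) per level
        self.levels = [defaultdict(set) for _ in range(levels)]

    def store(self, level, cue, item):
        self.levels[level][cue].add(item)

    def recall(self, cue, context=()):
        """Collect associations for a cue across all levels; if a
        context is given, prefer associations that mention it."""
        hits = set()
        for level in self.levels:
            hits |= level.get(cue, set())
        if context:
            contextual = {h for h in hits if any(c in h for c in context)}
            return contextual or hits
        return hits

mem = HierarchicalAssociativeMemory()
mem.store(0, "apple", "red apple")   # concrete episode
mem.store(1, "apple", "fruit")       # generalization
mem.store(2, "fruit", "food")        # more abstract association
print(mem.recall("apple"))                      # both associations
print(mem.recall("apple", context=("fruit",)))  # context narrows recall
```

The point of the sketch is the `context` parameter: the same cue can recall different associations depending on the situation the agent is in, which is the role the abstract assigns to «life experience».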
Article
Full-text available
This discussion article attempts to consider the problem of recognizing and differentiating so-called “philosophical zombies” in order to build a set of operational criteria for determining the subjectivity of artificial intelligent systems. This task can be seen as one possible route toward solving the “hard problem of consciousness”. Although the proposed approach alone does not solve the hard problem, it reveals certain aspects of neurophysiology, cybernetics and information theory on the way to solving it. The research methodology is an interdisciplinary study of the subject, merging the results of a review in terms of four theories into unified conclusions. The relevance of this task stems from the increasing use of artificial cognitive agents in human life: where exactly is the boundary that separates a rational being from an artificial cognitive agent, even one with a mind of a different nature? According to the author, it is the presence of phenomenal consciousness that gives an object subjectivity, so the development of ever more complex artificial cognitive agents (artificial intelligence systems) will ultimately force this question into comprehensive discussion. The article attempts to introduce a procedure for recognizing philosophical zombies and its limitations, and reflects on whether artificial cognitive agents can acquire qualia. The article will be interesting to everyone keenly interested in artificial intelligence in all its aspects, as well as in the philosophy of consciousness.
Article
Full-text available
The introduction of AI and other digital technologies is hindered by citizens' low level of trust in algorithms and in new technologies generally, as well as by the absence of a clear ethical framework for the application of AI.
Conference Paper
Full-text available
How does the human brain work? How do we form sentences? These questions have already occupied many scientists and several companies developing artificial intelligence. This article presents a study of language networks. It begins with a survey of works that have already dealt with this issue; in the next part, the author deals with applications of language networks and semantic networks.
Article
Full-text available
Increasing reliance on skill-intensive subsistence strategies appears to be a hallmark of human evolution, with wide-ranging implications for sociality, brain size, life-history and cognitive adaptations. These parameters describe a human technological niche reliant on efficient intergenerational reproduction of increasingly complex foraging techniques, including especially the production and effective use of tools. The archaeological record provides a valuable source of evidence for tracing the emergence of this modern human condition, but interpretation of this evidence remains challenging and controversial. Application of methods from psychology and neuroscience to Palaeolithic tool-making experiments offers new avenues for establishing empirical links between technological behaviours, neurocognitive substrates and archaeologically observable material residues. Here we review recent progress and highlight key challenges for the future.
Article
Full-text available
Novel experience and learning new skills are known modulators of brain function. Advances in non-invasive brain imaging have provided new insight into the structural and functional reorganization associated with skill learning and expertise. Especially significant imaging evidence comes from the domains of sports and music. Data from in vivo imaging studies in sports and music have provided vital information on plausible neural substrates contributing to the brain reorganization underlying skill acquisition in humans. This mini review attempts to take a narrow snapshot of imaging findings demonstrating the functional and structural plasticity that mediate skill learning and expertise, while identifying converging areas of interest and possible avenues for future research.
Article
Artificial Intelligence (AI) is an area of research driven by innovation and development, culminating in computers and machines with human-like intelligence characterized by cognitive ability, learnability, adaptability and decision-making ability. The study found that AI is widely adopted and used in education, especially by educational institutions, in various forms. This article reviewed work by scientists from different countries. The paper discusses the prospects for applying artificial intelligence and machine learning technologies in education and in everyday life. The history of the development of artificial intelligence is described, and technologies of machine learning and neural networks are analyzed. An overview of already implemented projects using artificial intelligence is given, along with a forecast of what the authors consider the most promising directions for the development of artificial intelligence technologies in the coming period. The article also analyzes how educational research is being transformed into an experimental science: AI is combined with the study of science in new 'digital laboratories', in which ownership of data, as well as power and authority in the production of educational knowledge, are redistributed between research complexes of computers and scientific knowledge.
Article
The deep learning paradigm has allowed computer scientists to take a fresh look at how knowledge is represented and assimilated. Studies of artificial analogs of neurons and of the brain's synaptic connections have revealed many significant regularities in the opposition of the cognitive "medium" and cognitive information. This understanding gave new impetus to the previously developed concept of embodied cognition in various branches of artificial intelligence. "Embodiment" is usually understood as the combination of cognitive and substrate components. At the same time, there remain world-system connections that draw a broader context into the dynamics of correlation between the subject of cognition (a cognitive agent with bounded rationality) and the environment. The concept of embodied cognition assumes a clash between cognitive systems built upon different infogenesis and infotectonics (for example, different computing platforms and degrees of agency). The cross-modal Turing test is proposed as a universal communication interface that allows the "message" and the "medium" of an embodied cognitive agent to test each other. Using reciprocal environments and systems will allow a sequential cross-modal Turing test for two competing modules. Such an approach may prove decisive in cyber-physical systems, which are born at the junction of diverse technical and engineering solutions, as well as in systems that require a high learning rate and model correction. In neural-network practice, this approach can be effective in transfer learning, in which a pre-trained fragment of a network is correlated with data that are fundamentally irrelevant (for the neural network).
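The closing remark about transfer learning (correlating a pre-trained network fragment with new data) can be illustrated with a generic sketch. This is not the authors' construction, just the standard idea of freezing a pre-trained feature extractor and fitting only a small task-specific head; all function names are hypothetical:

```python
# Generic transfer-learning sketch: the "pre-trained" feature
# extractor is frozen; only a linear head is fitted to new data.
def pretrained_features(x):
    # stands in for the body of a previously trained network
    return [x, x * x]

def fit_head(data, lr=0.01, epochs=500):
    """Fit y ~ w . features(x) + b by gradient descent, leaving
    the feature extractor itself untouched."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            err = w[0] * f[0] + w[1] * f[1] + b - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# new task the head must learn: y = 2*x^2 + 1
data = [(x, 2 * x * x + 1) for x in (-2, -1, 0, 1, 2)]
w, b = fit_head(data)
print(w, b)  # w[1] approaches 2, b approaches 1
```

Only the head's parameters move during training; the frozen extractor is exactly the "pre-trained fragment" whose features the new data must be correlated with.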
Chapter
Artificial intelligence and converging technologies are currently setting global post-industrial (economic, cultural, social, and manufacturing) trends: the Internet of Things, pervasive computing, physical-digital communication, and smart medicine and smart education are step by step replacing traditional forms of communication, logistics, management, data logging, and automation.
Article
In this My Word, Joseph LeDoux describes how his work as a graduate student got him interested in human consciousness. Although he has not studied this topic since the 1970s, he has never stopped thinking and writing about it during his four-decade career exploring how non-conscious processes involving the amygdala detect and respond to danger. Here, he tells us what is on his mind about consciousness these days.
Chapter
The Chinese room argument is a refutation of ‘strong artificial intelligence’ (strong AI), the view that an appropriately programmed digital computer capable of passing the Turing test would thereby have mental states and a mind in the same sense in which human beings have mental states and a mind. Strong AI is distinguished from weak AI, which is the view that the computer is a useful tool in studying the mind, just as it is a useful tool in other disciplines ranging from molecular biology to weather prediction.
Chapter
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms “machine” and “think”. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words “machine” and “think” are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, “Can machines think?” is to be sought in a statistical survey such as a Gallup poll.
Article
This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4. “Could a machine think?” On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.
References

Dushkin R. V., Andronov M. G. (2019) Hybrid scheme for constructing artificial intelligent systems // Cybernetics and Programming. 2019. No. 4. Pp. 51-58. DOI: 10.25136/2644-5522.2019.4.29809. URL: http://e-notabene.ru/kp/article_29809.html

Dennett D. C., Allen L. (ed.) (1991) Consciousness Explained. The Penguin Press, 1991. 551 p. ISBN 978-0-7139-9037-9.

Melchitzky D. S., Lewis D. A. (2017) 1.2 Functional Neuroanatomy // Kaplan and Sadock's Comprehensive Textbook of Psychiatry: in 2 vol. / Ed. Benjamin J. Sadock, Virginia A. Sadock, Pedro Ruiz. Issue 10. Lippincott Williams & Wilkins, 2017. Thalamus. Pp. 158-170. ISBN 978-1451100471.

Schmidt R., Tevs G.