Toward Human-Level Models of Minds
Philip C. Jackson, Jr.
TalaMind LLC
www.talamind.com
dr.phil.jackson@talamind.com
Abstract
A comparison of Laird, Lebiere, and Rosenbloom's Standard Model of the Mind with the 'TalaMind' approach suggests some implications for computational structure and function of human-like minds, which may contribute to a community consensus about architectures of the mind.

Copyright © 2017, TalaMind LLC (www.talamind.com). All rights reserved. This paper is published in the 2017 AAAI Fall Symposium Series Technical Reports, FSS-17-05, pp. 371-375.
1. Overview
The original source for the Standard Model was a 2013 AAAI Symposium with broad representation, reflecting decades of published research. The Symposium's results were subsequently developed by focusing on three cognitive architecture systems (ACT-R, Sigma, and Soar) designed to support real-world applications. The authors of the Standard Model seek "to begin the process of engaging the international research community in developing what can be called a standard model of the mind, where the mind we have in mind here is human-like."
The TalaMind approach (Jackson 2014) is based on a review of previous research, leading to a research approach for achieving human-level AI. The thesis discusses a prototype demonstration system that is far from ready for real-world applications, yet illustrates the potential of the research approach to achieve human-level AI. The TalaMind thesis also discusses some features relevant to human-level AI that are topics for discussion in further developing the Standard Model.
2. What is a Mind?
The paper presenting the Standard Model suggests a cognitive architecture can be equated with a hypothesis about the fixed structure of the mind. By presenting a standard model for cognitive architectures, it therefore gives a standard model for the mind. This is consistent with accepting Newell and Simon's (1976) hypothesis that "A physical symbol system has the necessary and sufficient means for general intelligent action." Both the Standard Model and TalaMind include the computational capabilities of physical symbol systems, yet both also allow non-symbolic processing.
However, the Standard Model does not yet directly include some features people normally ascribe to their minds. These features are topics for discussion in further developing the Standard Model.
3. Introduction to TalaMind
The TalaMind thesis (Jackson 2014) presents a research approach toward human-level artificial intelligence. This involves developing an AI system using a language of thought (called Tala) based on the unconstrained syntax of a natural language; designing this system as a collection of 'executable concepts' that can create and modify concepts, expressed in the language of thought, to behave intelligently in an environment; and using methods from cognitive linguistics, such as mental spaces (Fauconnier 1994) and conceptual blends (Fauconnier and Turner 2002), for multiple levels of representation and computation. Proposing a design inspection alternative to the Turing Test, the thesis discusses 'higher-level mentalities' of human intelligence, which include natural language understanding, higher-level learning, meta-cognition and multi-level reasoning, imagination, and consciousness.
'Higher-level learning' refers collectively to forms of learning required for human-level intelligence, such as learning by creating explanations and testing predictions about new domains based on analogies and metaphors with previously known domains; reasoning about ways to debug and improve behaviors and methods; learning and invention of natural languages and language games; learning or inventing new representations; and, in general, self-development of new ways of thinking. The phrase 'higher-level learning' distinguishes these from the lower-level forms of learning investigated in previous research on machine learning.
'Multi-level reasoning' refers collectively to the reasoning capabilities of human-level intelligence, including meta-reasoning, analogical reasoning, causal and purposive reasoning, abduction, induction, and deduction.
To provide a context for analysis of its approach, the thesis discusses an architecture called TalaMind for the design of AI systems, adapted from Gärdenfors' (1995) paper on inductive inference (see Appendix I). The TalaMind architecture has three levels, called the linguistic, archetype, and associative levels. At the linguistic level, the architecture includes the Tala language, a conceptual framework for managing concepts expressed in Tala, and conceptual processes that operate on concepts in the conceptual framework to produce intelligent behaviors and new concepts. The archetype level is where cognitive concept structures are represented, using methods such as conceptual spaces, image schemas, and radial categories. The associative level would typically interface with a real-world environment and supports connectionism, Bayesian processing, etc. In general, the thesis is agnostic about research choices at the archetype and associative levels.
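As an informal illustration, the following Python sketch shows one way the three levels might be organized as software components. All class and method names here are assumptions made for exposition, not definitions from the thesis, which leaves these design choices open.

```python
# Illustrative sketch only: one possible organization of TalaMind's
# three conceptual levels. All names are hypothetical.

class LinguisticLevel:
    """Holds Tala concepts and the conceptual processes that act on them."""
    def __init__(self):
        self.conceptual_framework = []   # concepts expressed in Tala
        self.conceptual_processes = []   # executable concepts

    def step(self):
        # Each conceptual process may inspect the framework and derive
        # new concepts, which are added back to the framework.
        for process in self.conceptual_processes:
            self.conceptual_framework.extend(process(self.conceptual_framework))

class ArchetypeLevel:
    """Cognitive concept structures, e.g. conceptual spaces, image schemas."""
    def categorize(self, percept):
        raise NotImplementedError   # open research choice in the thesis

class AssociativeLevel:
    """Environment interface; connectionist or Bayesian processing."""
    def perceive(self, signal):
        raise NotImplementedError   # open research choice in the thesis

class TalaAgent:
    """A system with a TalaMind architecture."""
    def __init__(self):
        self.linguistic = LinguisticLevel()
        self.archetype = ArchetypeLevel()
        self.associative = AssociativeLevel()
```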
For concision, the term 'Tala agent' refers to a system with a TalaMind architecture. The architecture is open at the three conceptual levels, e.g. permitting predicate calculus, conceptual graphs, and other symbolisms in addition to the Tala language at the linguistic level, and permitting integration across the three levels, e.g. the potential use of deep neural nets at the linguistic and archetype levels.
The Tala language responds to McCarthy's 1955 proposal for a formal language that corresponds to English. It enables a Tala agent to formulate statements about its progress in solving problems. Tala can represent unconstrained, complex English sentences involving self-reference, conjecture, and higher-level concepts, with underspecification and semantic annotation. Short English expressions have short correspondents in Tala, a property McCarthy sought for a formal language in 1955.
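For illustration only, a Tala-like sentence might be approximated as a nested structure mirroring English syntax, with slots for semantic annotation and underspecification. The actual Tala notation in the thesis differs in detail; the structure below is a toy approximation.

```python
# Toy approximation, not the thesis's actual notation: a Tala-like
# rendering of "Leo believes Ben will help Leo", with an
# underspecified word sense marked by '?'.

sentence = ("believe",
            ("sub", "Leo"),
            ("obj", ("help",
                     ("sub", "Ben"),
                     ("obj", "Leo"),
                     ("tense", "future"),
                     ("wsense", "?"))))   # underspecification slot
```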
The theoretical basis for Tala is discussed in Chapter 3 of the TalaMind thesis. Section 3.3 argues it is theoretically possible to use the syntax of a natural language to represent meaning in a conceptual language, and to reason directly with natural language syntax, at the linguistic level of the TalaMind architecture. Chapter 4 discusses theoretical objections, including McCarthy's (2008) arguments that a language of thought should be based on mathematical logic instead of natural language.
Chapter 3's analysis shows the TalaMind approach can address theoretical questions not easily addressed by more conventional approaches. For instance, it supports reasoning in mathematical contexts, but also supports reasoning about people who hold self-contradictory beliefs. Tala provides a language for reasoning with underspecification, and for reasoning with sentences that have meaning yet also have nonsensical interpretations. Tala sentences can declaratively describe recursive mutual knowledge. Tala facilitates representation and conceptual processing for higher-level mentalities, such as learning by analogical, causal, and purposive reasoning; learning by self-programming; and imagination via conceptual blends.
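As a toy illustration of the point about recursive mutual knowledge, such a statement can be declared finitely and unfolded to any depth on demand. The notation below is an assumption made for exposition, not Tala syntax.

```python
# Hedged sketch, not the thesis's notation: recursive mutual knowledge
# declared finitely, then unfolded to any requested depth.

def mutual_knowledge(agents, proposition):
    """Finite declaration that the agents mutually know the proposition."""
    return ("mutual-know", agents, proposition)

def unfold(statement, depth):
    """Expand a 'mutual-know' statement into nested 'knows' statements."""
    _, agents, proposition = statement
    if depth == 0:
        return proposition
    inner = unfold(statement, depth - 1)
    return tuple(("knows", agent, inner) for agent in agents)

stmt = mutual_knowledge(("Leo", "Ben"), "the-harvest-is-ready")
print(unfold(stmt, 2))   # Leo knows that Ben knows that ..., to depth 2
```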
4. Discussion Topics for the Standard Model
The TalaMind thesis discusses four features relevant to human-level AI that are topics for discussion in further developing the Standard Model.
4.1 Artificial Consciousness
The TalaMind thesis accepts the objection by some AI skeptics that a system which is not aware of what it is doing, and does not have some awareness of itself, cannot be considered to have human-level intelligence. The perspective of the thesis is that it is both necessary and possible for a system to demonstrate at least some aspects of consciousness in order to achieve human-level AI. However, the thesis does not claim AI systems will achieve the subjective experience humans have of consciousness.
The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness, it should demonstrate:

1) Observation of an external environment.
2) Observation of itself in relation to the external environment.
3) Observation of internal thoughts.
4) Observation of time: of the present, the past, and potential futures.
5) Observation of hypothetical or imaginative thoughts.
6) Reflective observation: observation of having observations.
To observe these things, a TalaMind system should support representations of them, and support processing of such representations. The TalaMind prototype illustrates how a TalaMind architecture could support artificial consciousness.
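One minimal way to make the six capabilities concrete in software is sketched below. The tagged-record scheme and all names are hypothetical assumptions, not the prototype's implementation.

```python
# Illustrative sketch only: the six observation capabilities as tagged
# records over an observation stream. All names are hypothetical.

KINDS = {"environment", "self-in-environment", "internal-thought",
         "time", "hypothetical", "reflective"}

class Observer:
    def __init__(self):
        self.observations = []

    def observe(self, kind, content):
        assert kind in KINDS
        record = (kind, content)
        self.observations.append(record)
        return record

    def reflect(self):
        # Reflective observation: an observation of having an observation.
        if self.observations:
            self.observe("reflective", self.observations[-1])

    def demonstrated_kinds(self):
        # Which of the six capabilities the system has exhibited so far.
        return {kind for kind, _ in self.observations}
```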
Artificial consciousness is permitted though not directly addressed in the Standard Model, since a reflective architecture is not part of the model. Consensus was not reached on this topic, and it was therefore omitted from the model (Rosenbloom 2017).

Some form of artificial consciousness may be required for a consensus among scientists that an artificial intelligence has a human-level mind. People ascribe consciousness to their minds, and may expect it in artificial minds. So, this is a topic for future discussion in developing the Standard Model.
4.2 Society of Mind
The TalaMind hypotheses do not require a society of mind architecture, but such an architecture is consistent with the hypotheses, and it is natural to implement a society of mind at the linguistic level of a TalaMind architecture. In the TalaMind prototype, a Tala agent has a society of mind in which subagents communicate by exchanging Tala concepts. Thus the TalaMind prototype simulates self-talk (mental discourse) within a Tala agent. Self-talk is an important feature people normally ascribe to their own minds.
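A minimal sketch of such a society of mind, assuming a shared message queue and hypothetical subagent names, might look like the following; the prototype's actual mechanism is richer.

```python
# Minimal sketch with hypothetical names: subagents of one Tala agent
# exchanging Tala concepts through a shared queue, simulating self-talk.

from collections import deque

class Subagent:
    def __init__(self, name):
        self.name = name

    def react(self, concept):
        # A real subagent would pattern-match the concept against its
        # executable concepts; here it simply acknowledges the message.
        return ("say", self.name, ("received", concept))

class SocietyOfMind:
    def __init__(self, subagents):
        self.subagents = subagents
        self.queue = deque()

    def post(self, concept):
        self.queue.append(concept)

    def run(self, steps):
        for _ in range(steps):
            if not self.queue:
                break
            concept = self.queue.popleft()
            for subagent in self.subagents:
                print(subagent.react(concept))   # trace of self-talk

mind = SocietyOfMind([Subagent("planner"), Subagent("critic")])
mind.post(("goal", "harvest", "wheat"))
mind.run(1)
```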
The Standard Model includes massive parallelism within and across its modules. Whether it can support a society of mind depends on whether the parallelism included within procedural memory is adequate. However, there is not yet a consensus that society of mind is a useful paradigm for constructing general AI systems (Rosenbloom 2017).
So, this is a topic for future consideration in developing
the Standard Model.
4.3 Nested Conceptual Simulation
To support reasoning about potential future events, and counterfactual reasoning about past and present events, a Tala agent's conceptual framework should support the creation and conceptual processing of hypothetical scenarios of events. A hypothetical context may include models of other agents' beliefs and goals, to support simulating what they may think and do. The TalaMind thesis uses the term 'nested conceptual simulation' to refer to an agent's conceptual processing of hypothetical scenarios, with possible branching of scenarios based on alternative events, such as the choices of simulated agents.
In the TalaMind prototype, the 'farmer's dilemma' simulation shows conceptual processes in which two Tala agents (Leo and Ben) imagine what will happen in hypothetical situations, using nested conceptual simulation. Leo imagines what Ben may think and do, and vice versa. This amounts to a Theory of Mind capability within a TalaMind architecture, i.e. the ability of a Tala agent to consider itself and other Tala agents as having minds with beliefs, desires, different possible choices, etc.
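The following sketch suggests how nested conceptual simulation might branch over hypothetical choices, with each agent recursively modeling the other. The choice generator and scenario contents are illustrative assumptions, not the prototype's actual farmer's-dilemma trace.

```python
# Hedged sketch: nested conceptual simulation as branching over
# hypothetical events, each agent modeling the other's choices.

def possible_choices(agent, event):
    # Toy generator of an agent's alternative actions in a scenario.
    return [(agent, "helps"), (agent, "does-not-help")]

def simulate(agent, other, event, depth):
    """Return the tree of hypothetical scenarios `agent` imagines,
    nesting a simulation of what `other` may think and do."""
    if depth == 0:
        return event
    branches = []
    for choice in possible_choices(other, event):
        # Nested simulation: what would the other agent imagine next?
        branches.append((choice, simulate(other, agent, choice, depth - 1)))
    return (event, branches)

tree = simulate("Leo", "Ben", ("Leo", "asks-for-help"), depth=2)
print(tree)
```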
Theory of Mind may require additional architectural mechanisms in the Standard Model, but that is not clear at this point. The three reference architectures for the Standard Model may support Theory of Mind without specific mechanisms for it, but that is also a topic for further discussion (Rosenbloom 2017).
4.4 Self-Programming
People often think about how to change and improve processes. Hence a conceptual language for a system with human-level AI must be able to represent concepts that describe how to modify processes. In the TalaMind approach, executable concepts can describe how to modify executable concepts. The TalaMind demonstration system illustrates this in a story simulation where a Tala agent reasons about how to change its process for making bread, the process being represented by an executable concept. This lays the groundwork for self-programming, an important form of higher-level learning necessary for human-level AI.
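A minimal sketch of this idea follows, with hypothetical names and a deliberately simple process representation; the prototype's executable concepts are expressed in Tala rather than Python.

```python
# Illustrative sketch with hypothetical names: a process represented
# declaratively, so another executable concept can inspect and modify
# it. This is the seed of self-programming, not the prototype's code.

make_bread = {
    "name": "make-bread",
    "steps": ["grind wheat", "make dough", "bake dough"],
}

def improve_process(process, new_step, after_step):
    """An executable concept that edits another executable concept,
    inserting a new step after a given step."""
    position = process["steps"].index(after_step) + 1
    process["steps"].insert(position, new_step)
    return process

# The agent reasons that unleavened dough bakes poorly, and revises
# its own bread-making process accordingly.
improve_process(make_bread, "let dough rise", after_step="make dough")
print(make_bread["steps"])
```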
The Standard Model specifies that procedural memory has no direct access to itself. This prevents procedures from directly modifying procedures. However, this may not rule out self-programming: Soar and ACT-R may have demonstrated self-programming by reasoning about declarative representations of procedures in working memory and then creating corresponding procedures in procedural memory (Rosenbloom 2017).
So, this is a topic for further discussion in developing
the Standard Model.
5. Limitations of TalaMind
Of course, the TalaMind thesis does not claim to actually achieve human-level AI, or even to identify all the higher-level mentalities necessary for human-level AI. It only makes a start in this direction, and identifies many areas for future research to develop the approach. These include areas previously studied by others which were outside the scope of the thesis, such as ontology, common sense knowledge, and spatial reasoning and visualization. Thesis section 7.8 presents arguments in favor of the TalaMind approach over other approaches for achieving human-level AI. Involvement of the AI research community is needed for the TalaMind approach to succeed.
6. Conclusion
The TalaMind approach may be considered a direction within the Standard Model toward Newell's vision of "a science of man adequate in power and commensurate with his complexity", and toward the vision of McCarthy, Minsky, Rochester, and Shannon (1955), who conjectured that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

What Turing wrote in 1950 is still true: "We can only see a short distance ahead, but we can see plenty there that needs to be done." Yet we have travelled far over six decades, and can now envision architectures for human-level artificial minds.
Acknowledgements
I thank Paul Rosenbloom for information referenced in Section 4 of this paper, John Laird for requesting the discussion in Appendix II, and an anonymous reviewer for comments motivating Appendix I.
Appendices
I. TalaMind’s Relation to Gärdenfors (1995)
Gärdenfors (1995) discussed three ways of characterizing or describing observations, which he called the linguistic, conceptual, and subconceptual levels of inductive inference.
It is most accurate to say the TalaMind approach adapts (rather than adopts) Gärdenfors' levels by considering all of them to be conceptual levels, where concepts may be represented in different ways:

1) Linguistically.
2) As cognitive categories (using methods such as conceptual spaces, image schemas, radial categories, etc.).
3) Associatively (e.g. via connectionism).
Hence TalaMind's three architectural levels are called
the linguistic, archetype, and associative levels, to avoid
saying only one level is conceptual.
Gärdenfors' insights remain relevant, even though his discussion of the linguistic level focused on descriptions using formal languages. However, Gärdenfors (1995) did not discuss support for the TalaMind hypotheses at the linguistic level, and did not include elements of the linguistic level discussed in the TalaMind thesis, i.e. the Tala language, a conceptual framework for managing concepts expressed in Tala, and conceptual processes that operate on concepts in the conceptual framework to produce intelligent behaviors and new concepts. Thus Gärdenfors (1995) did not discuss higher-level learning and other higher-level mentalities, nor aspects of minds discussed in the present paper.
II. Standard Model’s Relation to TalaMind Levels
To help identify potential areas for development of the
Standard Model (SM), the following paragraphs discuss
how SM is related to TalaMind’s three conceptual levels.
SM's processing involves symbolic data structures and production rules, with pattern-matching in cognitive cycles. These data structures and production rules are symbolic expressions, and would be at the TalaMind linguistic conceptual level. TalaMind is open to such symbolic expressions, though to achieve human-level AI the thesis (Jackson 2014) advocates a language (Tala) for conceptual expressions using natural language syntax, and generalizes production rules as executable concepts expressed in Tala. In the TalaMind prototype these are supported with cognitive cycles for pattern-matching of Tala expressions. Use of Tala and executable concepts may be an area for future development of SM.
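For concreteness, a stripped-down cognitive cycle with wildcard pattern-matching might be sketched as follows. This is an assumption-laden toy, not SM's or the prototype's actual machinery, and '?' as a wildcard is an invented convention.

```python
# Minimal sketch: one cognitive cycle that pattern-matches rules
# against working memory. '?' is a wildcard; a rule's action produces
# a new symbolic expression from the matched fact.

def matches(pattern, fact):
    return len(pattern) == len(fact) and \
        all(p == "?" or p == f for p, f in zip(pattern, fact))

def cycle(working_memory, rules):
    """One cognitive cycle: fire every rule whose pattern matches."""
    new_facts = []
    for pattern, action in rules:
        for fact in working_memory:
            if matches(pattern, fact):
                new_facts.append(action(fact))
    working_memory.extend(new_facts)

wm = [("goal", "make-bread")]
rules = [(("goal", "?"), lambda fact: ("subgoal", "find-recipe", fact[1]))]
cycle(wm, rules)
print(wm)   # adds ('subgoal', 'find-recipe', 'make-bread')
```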
SM says "Declarative memory is a long-term store for facts and concepts. It is structured as a persistent graph of symbolic relations, with metadata reflecting attributes such as recency and frequency of (co-)occurrence..." (Laird, Lebiere, and Rosenbloom 2017). This suggests symbolic relations are the primary mode for representing concepts in SM. It is not clear whether SM provides an archetype level that models cognitive categories using methods such as conceptual spaces, image schemas, and radial categories, or allows use of deep neural nets at the archetype level. Support of an archetype level may be an area for future development of SM.
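A toy rendering of such a store, assuming illustrative field names and a crude frequency/recency retrieval heuristic, might look like this; SM's actual retrieval mechanisms (e.g. ACT-R activation) are more sophisticated.

```python
# Hedged sketch of declarative memory as the quoted passage describes
# it: a graph of symbolic relations with recency and frequency
# metadata. Field names and the retrieval heuristic are illustrative.

import time

class DeclarativeMemory:
    def __init__(self):
        self.relations = {}   # (subject, relation, object) -> metadata

    def store(self, triple):
        meta = self.relations.setdefault(triple, {"frequency": 0})
        meta["frequency"] += 1          # frequency of (co-)occurrence
        meta["recency"] = time.time()   # recency of occurrence

    def retrieve(self, subject):
        # Prefer frequent, recent relations (a crude activation stand-in).
        hits = [t for t in self.relations if t[0] == subject]
        return sorted(hits,
                      key=lambda t: (self.relations[t]["frequency"],
                                     self.relations[t]["recency"]),
                      reverse=True)

memory = DeclarativeMemory()
memory.store(("bread", "made-from", "dough"))
print(memory.retrieve("bread"))
```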
SM’s metadata about symbolic expressions could exist
at the TalaMind linguistic level, though such metadata is
not discussed in the thesis. Tala expressions can refer via
pointers to other Tala expressions and represent statements
about other expressions, supporting meta-statements in
natural language syntax, which could be an area for future
development of SM.
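A toy illustration of such reference between expressions, using Python tuples rather than Tala notation, is the following; the point is only that a statement can hold a reference to, and thus be about, another statement.

```python
# Toy sketch, not Tala notation: one expression referring to another,
# so statements about statements nest naturally.

claim = ("Ben", "will-help", "Leo")
meta = ("Leo", "doubts", claim)    # a statement about the claim
meta2 = ("Ben", "knows", meta)     # a statement about the meta-statement
print(meta2)
```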
SM's declarative learning via acquisition of facts could occur at the TalaMind linguistic level, and could also be supported via SM's perception component at lower levels, discussed below. At the linguistic level TalaMind focuses on the higher-level learning needed for human-level AI, and is not limited to acquisition of facts or tuning of metadata. Examples of higher-level learning of declarative knowledge include: learning by creating explanations and testing predictions, using causal and purposive reasoning; and learning about new domains by developing analogies and metaphors with previously known domains. These forms of learning may be involved in discovery of scientific theories and predictions. (To be clear, much work remains to implement higher-level declarative learning in a functioning TalaMind system.) Higher-level learning of declarative knowledge could be an area for future development of SM.
SM's procedural learning involves reinforcement learning and procedural composition. Reinforcement learning affects weights for selecting actions, and procedural composition includes composition of rules and chunking. Since rules are symbolic expressions at the linguistic level, this suggests procedural learning in SM would occur primarily at the linguistic level, though lower-level processes may perhaps be involved.
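The two mechanisms might be caricatured as follows. The weight-update rule, the rule encoding, and all values are assumptions for illustration, not SM's specification.

```python
# Illustrative sketch of the two procedural-learning mechanisms named
# in the text: reinforcement tuning of action-selection weights, and
# composing two rules that chain into a single (chunking-like) rule.

weights = {"bake-now": 0.5, "wait-for-dough-to-rise": 0.5}

def reinforce(action, reward, learning_rate=0.1):
    """Move an action's selection weight toward the observed reward."""
    weights[action] += learning_rate * (reward - weights[action])

def compose(rule_ab, rule_bc):
    """If one rule's result is another rule's condition, form a single
    combined rule covering both steps."""
    (a, b1), (b2, c) = rule_ab, rule_bc
    return (a, c) if b1 == b2 else None

reinforce("wait-for-dough-to-rise", reward=1.0)
print(compose(("hungry", "find-food"), ("find-food", "make-bread")))
```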
The TalaMind thesis discusses procedural learning at the linguistic level. As noted in Section 4.4 above, self-programming is an important form of higher-level learning. The TalaMind approach supports this by allowing executable concepts to create and modify executable concepts. The TalaMind demonstration prototype illustrates the potential for self-programming in a story simulation where a Tala agent discovers and improves a process for making bread. (Much work remains to implement self-programming in a functioning TalaMind system.) This form of procedural learning is a potential area for future development of SM.
SM's perception component could be an element at the TalaMind associative level, which in TalaMind would typically interface with a real-world environment. This corresponds to SM's statements that "Perception converts external signals into symbols and relations..." and "The standard model … does not embody any commitments as to the internal representation (or processing) of information within perceptual modules, although it is assumed to be predominantly non-symbolic in nature, and to include learning" (Laird, Lebiere, and Rosenbloom 2017).

Likewise, SM's motor component may also be an element at the associative level. SM's conversion of symbol structures into external actions is envisioned in TalaMind to happen at an interface into the associative level.
SM stipulates that "More complex forms of learning involve combinations of the fixed set of simpler forms of learning". Table 1 of Laird, Lebiere, and Rosenbloom (2017) indicates the fixed set comprises procedural learning, at least via reinforcement and composition, plus declarative learning via acquisition of facts and metadata tuning. It seems clear this fixed set would not support the forms of higher-level learning envisioned in the TalaMind approach. This also indicates higher-level learning as a potential area for future development of the Standard Model.
References
Aleksander, I., and Morton, H. 2007. Depictive Architectures for Synthetic Phenomenology. In Artificial Consciousness, 67-81, ed. Chella, A. and Manzotti, R. Imprint Academic.
Fauconnier, G. 1994. Mental Spaces: Aspects of Meaning Construction in Natural Language. Cambridge University Press.
Fauconnier, G. and Turner, M. 2002. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. Basic Books, New York.
Gärdenfors, P. 1995. Three levels of inductive inference. Studies in Logic and the Foundations of Mathematics, 134, 427-449. Elsevier.
Jackson, P. C. 2014. Toward Human-Level Artificial Intelligence: Representation and Computation of Meaning in Natural Language. Ph.D. Thesis, Tilburg University, The Netherlands.
Laird, J. E., Lebiere, C. and Rosenbloom, P. S. 2017. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, to appear.
McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. 1955. A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. In Artificial Intelligence: Critical Concepts in Cognitive Science, 2, 44-53, ed. Chrisley, R. and Begeer, S. 2000. Routledge Publishing.
McCarthy, J. 2008. The well-designed child. Artificial Intelligence, 172, 18, 2003-2014.
Newell, A. 1973. You Can't Play 20 Questions with Nature and Win: Projective Comments on the Papers of this Symposium. In Visual Information Processing, 283-310, ed. Chase, W. G. Academic Press, New York.
Newell, A. and Simon, H. A. 1976. Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM, 19, 3, 113-126.
Newell, A. 1990. Unified Theories of Cognition. Harvard University Press.
Rosenbloom, P. S. 2017. Personal communication.
Turing, A. M. 1950. Computing machinery and intelligence. Mind, 59, 433-460.
This article is inspired by recent psychological studies confirming that a child is not born a blank slate but has important innate capabilities. An important part of the “learning” required to deal with the three-dimensional world of objects, processes, and other beings was done by evolution. Each child need not do this learning itself. By the 1950s there were already proposals to advance artificial intelligence by building a child machine that would learn from experience just as a human child does. What innate knowledge the child machine should be equipped with was ignored. I suppose the child machine was supposed to be a blank slate. Whatever innate knowledge a human baby may possess, we are interested in a well-designed that has all we can give it. To some extent, this paper is an exercise in wishful thinking. The innate mental structure that equips a child to interact successfully with the world includes more than the universal grammar of linguistic syntax postulated by Noam Chomsky. The world itself has structures, and nature has evolved brains with ways of recognizing them and representing information about them. For example, objects continue to exist when not being perceived, and children (and dogs) are very likely “designed” to interpret sensory inputs in terms of such persistent objects. Moreover, objects usually move continuously, passing through intermediate points, and perceiving motion that way may also be innate. What a child learns about the world is based on its innate mental structure. This article concerns designing adequate mental structures including a language of thought. This designer stance applies to designing robots, but we also hope it will help understand universal human mental structures. We consider what structures would be useful and how the innateness of a few of the structures might be tested experimentally in humans and animals. In the course of its existence we'll want our robot child to change. Some of the changes will be development, others learning. However, this article mainly takes a static view, because we don't know how to treat growth and development and can do only a little with learning.