Source publication
Is synthetic phenomenology a valid concept? In approaching consciousness from a computational point of view, the question of phenomenology is not often explicitly addressed. In this paper we review the use of phenomenology as a philosophical and a cognitive construct in order to have a meaningful transfer of the concept into the computational dom...
Similar publications
From a conceptual point of view, time is independent of its experience. That is, time can be given a conceptual description without any reference to terms related to the subjective consciousness of time. But concerning a phenomenology of that subjective experience of time, it can be shown that such experience of time is, in itself, tempor...
The paper presents and discusses phenomenological facts about perceptual spaces and percepts, but ends with a few thoughts about possible causal explanations of such spaces. The overarching double-sided hypothesis claims that - from a phenomenological point of view - each individual animal has at each consciously perceived moment of time a sense-mo...
The Psycho-Acoustical Transitional (PAT) session is a setting, completely defined from a mathematical-physical point of view, that promotes large-scale functional connectivity between neural populations. Phenomenologically, this can be described as a modulation of the state of consciousness, inducing the unfolding of a clinical ecstatic sta...
I am aware that there’s an internal system organising the particular configurations of the phenomenal field that present in response to specific situations. One such situation is looking into ‘vast spaces’.
It takes the brain some time to organise in response to a novel/unusual situation and it’s the unusual demands made on the visual system in tandem...
It is not yet well understood how we become conscious of the presence of other people as being other subjects in their own right. Developmental and phenomenological approaches are converging on a relational hypothesis: my perception of a “you” is primarily constituted by another subject’s attention being directed toward “me.” This is particularly t...
Citations
... Jackson (2019) uses the five-axiom definition of consciousness (Aleksander and Morton, 2007) in his TalaMind architecture of human-level AI. In contrast, my approach to consciousness will be based on the notion that displayed consciousness is consciousness. ...
... So, at least some aspects of consciousness are necessary for a system to demonstrate human-level intelligence. Jackson (2014, §3.7.6) discussed how a human-level AI could perform observations that would satisfy the "axioms of being conscious" proposed by Aleksander and Morton (2007). A human-level AI would benefit from using goal reasoning with the observations that support its artificial consciousness, to be more than a passive observer of its thoughts and environment. ...
What is the nature of goal reasoning needed for human-level artificial intelligence? This research position paper contends that to achieve human-level AI, a system architecture for human-level goal reasoning would benefit from a neuro-symbolic approach combining deep neural networks with a 'natural language of thought' and would be greatly handicapped if relying only on formal logic systems.
... • Reflective observation: observation of having observations. This definition was proposed in [1] (p. 136) and adapted from the "axioms of being conscious" proposed by Aleksander and Morton [10]. They used first-person, introspective statements to describe these elements of artificial consciousness. ...
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a ‘natural language of thought’ and would be greatly handicapped if relying only on formal logic systems.
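The "reflective observation" element quoted in the citation context above has a simple recursive structure: an observation whose content is itself an observation. As an editorial illustration only, here is a minimal Python sketch of that structure; the Observation class, field names, and helper function are hypothetical and are not drawn from [1] or from Aleksander and Morton.

```python
# Illustrative sketch of "reflective observation" as a nested record:
# an observation whose content is itself an earlier observation.
# Class and field names are hypothetical, not taken from the cited papers.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Observation:
    content: Any  # a percept, or another Observation

agent_log: List[Observation] = []

def observe(content: Any) -> Observation:
    """Record an observation and return it for possible re-observation."""
    obs = Observation(content)
    agent_log.append(obs)
    return obs

first = observe("red square at left")   # first-order observation
second = observe(first)                 # observation of having an observation
print(isinstance(second.content, Observation))  # True: reflective
```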
... • Reflective observation: observation of having observations. This definition was proposed in [26] (p. 136), and adapted from the "axioms of being conscious" proposed by Aleksander and Morton [1]. They used first-person, introspective statements to describe these elements of artificial consciousness. Artificial consciousness does not confront or claim to solve Chalmers' [5] Hard Problem of consciousness: There is no claim that having artificial consciousness means an AI system would have the human subjective experience of consciousness. ...
Note: Rather than this paper, I (the author) recommend reading the more recent, published paper "On Achieving Human-Level Knowledge Representation by Developing a Natural Language of Thought", which contains additional discussions. -- PCJ 9/21/21 --
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a ‘natural language of thought’, and would be greatly handicapped if relying only on formal logic systems.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. The axioms of artificial consciousness can be implemented with symbolic processing. The human first-person subjective experience of consciousness is richer and more complex than these axioms, though we don't know precisely how to explain it (§4.2.7). Therefore the previous paper (Jackson 2018) took the ...
... Artificial subjective consciousness would be more complex than Aleksander and Morton's (2007) axioms for artificial consciousness. The conclusions of the previous paper (Jackson 2018) continue to hold for symbolic artificial consciousness which only implements these axioms. ...
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
This white paper considers a counter-argument and caveat to the position of a previous paper (Jackson 2018) that a purely symbolic artificial consciousness is not equivalent to human consciousness and there need not be an ethical problem in switching off a purely symbolic artificial consciousness. The counter-argument is based on Newell and Simon’s Physical Symbol System Hypothesis, and leads to discussion of several topics, including whether a human-level AI can terminate its simulations of other minds without committing ‘mind-crimes’; whether human-level AI can be beneficial to humans without enslaving artificial minds; and some of the ethical issues for uploading human minds to computers. This paper concludes by summarizing reasons why the TalaMind approach (Jackson 2014) could be important for beneficial human-level AI and superintelligence, the openness of TalaMind to other research approaches, and topics for future research.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
This paper considers ethical, philosophical, and technical topics related to achieving beneficial human-level AI and superintelligence. Human-level AI need not be human-identical: The concept of self-preservation could be quite different for a human-level AI, and an AI system could be willing to sacrifice itself to save human life. Artificial consciousness need not be equivalent to human consciousness, and there need not be an ethical problem in switching off a purely symbolic artificial consciousness. The possibility of achieving superintelligence is discussed, including potential for 'conceptual gulfs' with humans, which may be bridged. Completeness conjectures are given for the 'TalaMind' approach to emulate human intelligence, and for the ability of human intelligence to understand the universe. The possibility and nature of strong vs. weak superintelligence are discussed. Two paths to superintelligence are described: The first path could be catastrophically harmful to humanity and life in general, perhaps leading to extinction events. The second path should improve our ability to achieve beneficial superintelligence. Human-level AI and superintelligence may be necessary for the survival and prosperity of humanity.
... consciousness, à la Aleksander and Morton (2007), is open to different answers for the problem, as discussed in §4.2.7. ...
This TalaMind White Paper further discusses some topics in (Jackson 2017): reasoning with natural language syntax; interlinguas and generalized societies of mind; self-talk; artificial consciousness and the Hard Problem of consciousness.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
A comparison of Laird, Lebiere, and Rosenbloom's Standard Model of the Mind with the 'TalaMind' approach suggests some implications for computational structure and function of human-like minds, which may contribute to a community consensus about architectures of the mind.
... The logical approach has focused on systems which can be said to capture and deploy conscious experience(a) through containing neural networks with phenomenal states. These are phenomenal by virtue of the fact that they are world-representing through learning procedures known as iconic transfer and depiction [Aleksander and Morton, 2007]. Making these states available to the researcher for evaluation as base data is part of a mental stance methodology. ...
(a) http://www.theswartzfoundation.org/banbury e.asp
This paper aligns recent developments on information integration and consciousness with work on axiomatic systems. Axiomatic models are usually implemented as weightless neural structures. It is argued that weightless approaches lead to a logic-based interpretation of the information generated by the states of a network. Learning of phenomenal states is raised as a necessary additional requirement before one can link integration to consciousness.
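Since the abstract above turns on axiomatic models being implemented as weightless neural structures, a brief sketch may help: in a weightless (RAM-based) network of the WISARD family, each "neuron" is a lookup table addressed by a tuple of binary inputs, so a trained discriminator is literally a set of learned logic functions, which is the logic-based interpretation of network states the paper refers to. The following Python sketch is an editorial illustration of that standard WISARD scheme only; the class and parameter names (WeightlessDiscriminator, tuple_size, and so on) are ours, not from the paper.

```python
# Minimal sketch of a WISARD-style weightless (RAM-based) discriminator.
# Each RAM node is a lookup table addressed by a fixed tuple of input bits,
# so the trained network is a set of logic functions rather than weights.
import random

class WeightlessDiscriminator:
    def __init__(self, input_size, tuple_size, seed=0):
        rng = random.Random(seed)
        bits = list(range(input_size))
        rng.shuffle(bits)                      # fixed random input mapping
        self.tuples = [bits[i:i + tuple_size]  # each RAM node samples one tuple
                       for i in range(0, input_size, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # addresses seen in training

    def _address(self, pattern, tpl):
        # Pack the sampled bits into an integer RAM address.
        return sum(pattern[b] << i for i, b in enumerate(tpl))

    def train(self, pattern):
        # Writing a 1 at the addressed location == learning the pattern.
        for ram, tpl in zip(self.rams, self.tuples):
            ram.add(self._address(pattern, tpl))

    def response(self, pattern):
        # Fraction of RAM nodes whose logic function fires on this input.
        hits = sum(self._address(pattern, tpl) in ram
                   for ram, tpl in zip(self.rams, self.tuples))
        return hits / len(self.rams)

# Usage: train on a binary pattern, then probe with a corrupted variant.
d = WeightlessDiscriminator(input_size=16, tuple_size=4)
d.train([1, 0] * 8)
print(d.response([1, 0] * 8))        # 1.0 on the trained pattern
print(d.response([1, 0] * 7 + [0, 0]))  # partial response: one bit flipped
```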
... It is with this in mind that we should consider the axiomatic approach, one of the most well-known top-down approaches currently being developed in AC by Igor Aleksander and his colleagues [9][10][11]. Aleksander's approach begins with axioms, which in his case are introspectively derived features of consciousness that are held to be minimally necessary and (ideally) jointly sufficient features of any system to which we would be likely to ascribe consciousness. ...
... These can be understood as aspects or dimensions of the way that the world is presented to us; they are presentational properties, aspects or dimensions of how the world is given to us. They are also deep properties in the sense that if any were absent from a putative experience we might be inclined to doubt that it was an experience at all.(10) One set of properties which has substantial value for the weak AC approach is Metzinger's list of six multi-level 'constraints'(11) (Table 1) (first developed in [18]).(12) Many of the core constraints in Metzinger's list can be seen as structural properties in just the sense introduced above: they are really dimensions of how any given experience is presented, or, to put this another way, they are conditions on a given state or process being considered an experience. ...
... An interesting attempt to do this while respecting beeper-based sampling can be found in [20]. (10) We leave open for the moment whether the structural properties referred to herein are jointly necessary and/or sufficient for consciousness, although it is likely that some of these properties will turn out to be more central than others and that some will turn out to be necessary for only certain types of conscious experience. (11) Metzinger calls these constraints, and develops them at multiple levels; we will call them properties for reasons that will become apparent. ...
Synthetic methods in science can aim at either instantiating a target phenomenon or simulating key mechanisms underlying that phenomenon; 'strong' and 'weak' approaches, respectively. While the former assumes a mature theory, the latter finds its value in helping specify such theories. Here, we argue that artificial consciousness is best pursued as a (weak) means of theory development in consciousness science, and not as a (strong) axiom-driven project to build a conscious artefact. As with the other sciences of the artificial (intelligence, life), artificial consciousness can contribute by elaborating the possibilities and limitations of candidate mechanisms, transforming properties into mechanism-based criteria, and as a result potentially unifying apparently distinct properties via new mechanism-based concepts. We illustrate our arguments by discussing both axiom-driven and neurobiologically grounded approaches to artificial consciousness.