Figure 1 illustrates the operation of the global workspace architecture, which comprises a set of specialist brain processes plus a global workspace. Information processing within the architecture consists of periods of competition interleaved with periods of broadcast. On the left of the figure, we see the set of specialist processes competing to gain access to the global workspace. Gaining access entails that the winning process (or coalition of processes) gets to broadcast its message, via the global workspace, to the full set of specialist processes.
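The competition/broadcast cycle described above lends itself to a compact illustration. The following Python sketch is illustrative only: the class name Specialist, the random salience bids, and the broadcast loop are assumptions standing in for whatever competition dynamics the actual architecture uses, not the paper's implementation.

```python
# Minimal sketch of one global-workspace cycle: competition, then broadcast.
# All names here (Specialist, propose, receive) are illustrative assumptions.
import random

class Specialist:
    def __init__(self, name):
        self.name = name
        self.inbox = []            # messages received during broadcast phases

    def propose(self):
        """Competition phase: bid for workspace access with a salience value."""
        salience = random.random()        # stand-in for a real salience signal
        return salience, f"{self.name}: report"

    def receive(self, message):
        """Broadcast phase: every specialist hears the winning message."""
        self.inbox.append(message)

def workspace_cycle(specialists):
    # Competition: every specialist bids for access to the workspace.
    bids = [(spec.propose(), spec) for spec in specialists]
    (salience, message), winner = max(bids, key=lambda b: b[0][0])
    # Broadcast: the winner's message goes out to all other specialists.
    for spec in specialists:
        if spec is not winner:
            spec.receive(message)
    return winner.name, message

specialists = [Specialist(n) for n in ("vision", "audition", "memory")]
for _ in range(3):
    print(workspace_cycle(specialists))
```

Each call to workspace_cycle runs one competition phase followed by one broadcast phase, mirroring the interleaving shown in the figure.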
Source publication
Is synthetic phenomenology a valid concept? In approaching consciousness from a computational point of view, the question of phenomenology is not often explicitly addressed. In this paper we review the use of phenomenology as a philosophical and a cognitive construct in order to have a meaningful transfer of the concept into the computational domain ...
Contexts in source publication
Context 1
...
Figure 1. The associative neuron group. Each cross-point can be understood as one synapse and each horizontal line can be understood as one neuron with m synapses and one output signal so(i). The purpose of each synapse is to associate the crossing signals s(i) and a(j) with each other. This is done via the synaptic weight w(i,j). The synaptic weight value w(i,j) = 0 means that the signals s(i) and a(j) are not associated with each ...
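Since the related excerpt below notes that this simplified operation "can be readily implemented with a computer program", here is one minimal Python reading of it. The binary signals, the Hebbian-style learning rule (set w(i,j) = 1 when s(i) and a(j) are co-active), and the thresholded recall rule are assumptions of this sketch, not Haikonen's exact formulation.

```python
# Sketch of the associative neuron group as an n-by-m weight matrix.
# Assumed here: binary signals, co-activity learning, thresholded recall.

def train(w, s, a):
    """Associate co-active signals: set w[i][j] = 1 when s[i] = a[j] = 1."""
    for i, si in enumerate(s):
        for j, aj in enumerate(a):
            if si and aj:
                w[i][j] = 1

def recall(w, a, threshold=1):
    """Evoke the outputs so(i) from an associative input vector a."""
    return [int(sum(wij * aj for wij, aj in zip(row, a)) >= threshold)
            for row in w]

n, m = 4, 3                           # n neurons, each with m synapses
w = [[0] * m for _ in range(n)]       # w(i,j) = 0: signals not associated
train(w, s=[1, 0, 1, 0], a=[0, 1, 0])
print(recall(w, a=[0, 1, 0]))         # -> [1, 0, 1, 0]
```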
Context 3
...
Figure 1: Control architecture for autonomous agents. Motivations can be seen as homeostatic processes, which maintain a controlled physiological variable within a certain range. Homeostasis means maintaining a stable internal state (Berridge, 2004). This internal state can be parameterized by several variables, which must stay around an ideal level. When the value of one of these variables differs from the ideal one, an error signal occurs: the drive. These drives constitute urges to action based on bodily needs related to self-sufficiency and survival. External stimuli, both innate and learned, are also able to motivate and drive behaviour (Cañamero, 1997). In order to model motivation, the hydraulic model of motivation described by Lorentz and Leyhausen (1973) has been used as an inspiration. This model is essentially a metaphor suggesting that motivational drive grows internally, operating a bit like pressure from a fluid reservoir that builds until it bursts through an outlet. Motivational stimuli from the external world act to open an outflow valve, releasing drive to be expressed in behaviour. In this model, internal drive strength interacts with external stimulus strength. If drive is low, then a strong stimulus is needed to trigger motivated behaviour. If the drive is high, then a mild stimulus is sufficient (Berridge, 2004). Following this idea, the intensity of a motivation (M_i) is a combination of the intensity of the related drive (D_i) and the related external stimuli (w_i), as expressed in the following equation: ...
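The equation itself is elided in the excerpt, so the Python sketch below assumes one simple additive form, M_i = D_i + w_i, sometimes used in motivational-agent models; the function names and numbers are illustrative only.

```python
# Sketch of the drive/stimulus interaction described above. The additive
# combination M_i = D_i + w_i is an assumption, since the excerpt's exact
# equation is elided. The drive D_i is modeled as the homeostatic error
# between a controlled variable and its ideal level.

def drive(value, ideal):
    """Homeostatic error signal: distance of a variable from its ideal level."""
    return abs(ideal - value)

def motivation(d_i, w_i):
    """Intensity of a motivation from its drive and related external stimuli."""
    return d_i + w_i

# High drive: a mild stimulus suffices to produce a strong motivation.
print(motivation(drive(value=20, ideal=80), w_i=5))    # -> 65
# Low drive: only a strong stimulus yields a comparable motivation.
print(motivation(drive(value=75, ideal=80), w_i=50))   # -> 55
```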
Context 4
... mandatory to include phenomenal, that is, depictive functions. Referring to the kernel architecture, there is much work to be done on modes of interaction between the modules. Current work includes a clarification of the way the emotion module E controls the link between the phenomenological P and M modules and the non-phenomenological action module A (fig. 1). Illusions, ambiguous and 'flipping' figures are situations where phenomenology and reality part company. We are pursuing the mechanisms that, in the kernel architecture, would lead to the kind of perceptual instabilities associated with perceiving the Necker cube. This underlines the usefulness of synthetic phenomenology, as perceptual reversals may be measured in the depictive machinery and the conditions for such reversals studied. This is revealing of the interaction between phenomenal and non-phenomenal processes in the brain. In GW architectures, it would be interesting to clarify the causes of phenomenology in the GW area which are not present in the supporting competitive processes. This is an introspective partitioning of five important aspects of being conscious:
1. I feel as if I am at the focus of an out-there world.
2. I can recall and imagine experiences of feeling in an out-there world.
3. My experiences in 2 are dictated by attention, and attention is involved in recall.
4. I can imagine several ways of acting in the future.
5. I can evaluate emotionally ways of acting into the future in order to act in some purposive way.
Igor Aleksander, The World In My Mind, My Mind In The World, Exeter: Imprint Academic, 2005.
Igor Aleksander, Mercedes Lahnstein, Rabinder Lee: Will and Emotions: A Machine Model that Shuns Illusions, Proc AISB 2005 Symposium on New Generation Approaches to Machine Consciousness, 2005.
Igor Aleksander and Barry Dunmall: Axioms and Tests for the Presence of Minimal ...
Context 5
... language understanding is a hard problem that has not yet been solved satisfactorily, and definitely not in any elegant way. Yet this is the exact problem that must be solved if meaningful inner speech is to be created in a machine. The author's "multimodal model of language" (Haikonen 2003) is one attempt towards natural use and understanding of language in a machine. Here an experiment relating to the implementation of this approach with associative neural networks is described. Spoken words are temporal sound patterns consisting of sequences of phonemes. The detection of words calls for the ability to capture and analyze sound patterns and transform the serial phoneme sequence into a parallel representation. Thereafter there are two possibilities for the word representation, namely the distributed representation and the single signal (grandmother) representation. In the distributed representation there can be one or more signals per phoneme or syllable; thus each word will be represented by a signal vector. In the single signal representation each word is represented by one signal only. The distributed representation method is more flexible and allows the use of inflection, while the single signal method is easier to use in simple simulations. The author has used an associative neuron group (Haikonen 1999) as the basic processing unit for the distributed and single signal representations. The operation of the associative neuron group is explained here in simplified (but working) terms, which can be readily implemented with a computer program. The associative neuron group can be seen as a group of neurons that share common associative (synaptic) input signals. Thus their synapses form a kind of a matrix, figure ...
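The contrast between the two word representations can be made concrete with a small sketch. The toy phoneme inventory, the fixed vector width, and the two-word lexicon below are assumptions for illustration, not details from the source.

```python
# Sketch contrasting the two word representations described above.

PHONEMES = ["k", "a", "t", "d", "o", "g"]   # toy phoneme inventory (assumed)
MAX_LEN = 4                                  # fixed parallel width (assumed)

def distributed(word_phonemes):
    """Distributed representation: one one-hot phoneme block per position,
    turning the serial phoneme sequence into a single parallel signal vector."""
    vec = []
    for pos in range(MAX_LEN):
        block = [0] * len(PHONEMES)
        if pos < len(word_phonemes):
            block[PHONEMES.index(word_phonemes[pos])] = 1
        vec.extend(block)
    return vec

# Single signal ("grandmother") representation: one dedicated signal per word.
LEXICON = ["cat", "dog"]
def grandmother(word):
    return [int(word == w) for w in LEXICON]

print(distributed(["k", "a", "t"]))   # 24-element signal vector for "cat"
print(grandmother("dog"))             # -> [0, 1]
```

The distributed vector leaves room for shared structure between inflected forms, while the grandmother vector is trivially simple but grows one signal per word, matching the trade-off described in the excerpt.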
Similar publications
From a conceptual point of view, time is independent of its experience. That is, time can be given a conceptual description without any reference to terms related to the subjective consciousness of time. But concerning a phenomenology of that subjective experience of time, it can be shown that such experience of time is, in itself, tempor...
The paper presents and discusses phenomenological facts about perceptual spaces and percepts, but ends with a few thoughts about possible causal explanations of such spaces. The overarching double-sided hypothesis claims that - from a phenomenological point of view - each individual animal has at each consciously perceived moment of time a sense-mo...
The Psycho-Acoustical Transitional (PAT) session is a completely defined setting from a mathematical-physical point of view that promotes large-scale functional connectivity between neural populations. Phenomenologically this phenomenon can be described as a modulation of the state of consciousness, inducing the unfolding of a clinical ecstatic sta...
I am aware that there's an internal system organising the particular configurations of the phenomenal field that present in response to specific situations. One such situation is looking into 'vast spaces'.
It takes the brain some time to organise in response to a novel/unusual situation and it’s the unusual demands made on the visual system in tandem...
It is not yet well understood how we become conscious of the presence of other people as being other subjects in their own right. Developmental and phenomenological approaches are converging on a relational hypothesis: my perception of a “you” is primarily constituted by another subject’s attention being directed toward “me.” This is particularly t...
Citations
... Jackson (2019) uses the five-axiom definition of consciousness (Aleksander and Morton, 2007) in his TalaMind architecture of human-level AI. In contrast, my approach to consciousness will be based on the notion that displayed consciousness is consciousness. ...
... So, at least some aspects of consciousness are necessary for a system to demonstrate human-level intelligence. Jackson (2014, §3.7.6) discussed how a human-level AI could perform observations that would satisfy the "axioms of being conscious" proposed by Aleksander and Morton (2007). A human-level AI would benefit from using goal reasoning with the observations that support its artificial consciousness, to be more than a passive observer of its thoughts and environment. ...
What is the nature of goal reasoning needed for human-level artificial intelligence? This research position paper contends that to achieve human-level AI, a system architecture for human-level goal reasoning would benefit from a neuro-symbolic approach combining deep neural networks with a 'natural language of thought' and would be greatly handicapped if relying only on formal logic systems.
... • Reflective observation: observation of having observations. This definition was proposed in [1] (p. 136) and adapted from the "axioms of being conscious" proposed by Aleksander and Morton [10]. They used first-person, introspective statements to describe these elements of artificial consciousness. ...
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a ‘natural language of thought’ and would be greatly handicapped if relying only on formal logic systems.
... • Reflective observation: observation of having observations. This definition was proposed in [26] (p. 136), and adapted from the "axioms of being conscious" proposed by Aleksander and Morton [1]. They used first-person, introspective statements to describe these elements of artificial consciousness. Artificial consciousness does not confront or claim to solve Chalmers' [5] Hard Problem of consciousness: There is no claim that having artificial consciousness means an AI system would have the human subjective experience of consciousness. ...
Note: Rather than this paper, I (the author) recommend reading the more recent, published paper "On Achieving Human-Level Knowledge Representation by Developing a Natural Language of Thought", which contains additional discussions. -- PCJ 9/21/21 --
What is the nature of knowledge representation needed for human-level artificial intelligence? This position paper contends that to achieve human-level AI, a system architecture for human-level knowledge representation would benefit from a neuro-symbolic approach combining deep neural networks with a 'natural language of thought', and would be greatly handicapped if relying only on formal logic systems.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. The axioms of artificial consciousness can be implemented with symbolic processing. The human first-person subjective experience of consciousness is richer and more complex than these axioms, though we don't know precisely how to explain it (§4.2.7). Therefore the previous paper (Jackson 2018) took the ...
... Artificial subjective consciousness would be more complex than Aleksander and Morton's (2007) axioms for artificial consciousness. The conclusions of the previous paper (Jackson 2018) continue to hold for symbolic artificial consciousness which only implements these axioms. ...
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
This white paper considers a counter-argument and caveat to the position of a previous paper (Jackson 2018) that a purely symbolic artificial consciousness is not equivalent to human consciousness and there need not be an ethical problem in switching off a purely symbolic artificial consciousness. The counter-argument is based on Newell and Simon's Physical Symbol System Hypothesis, and leads to discussion of several topics, including whether a human-level AI can terminate its simulations of other minds without committing 'mind-crimes'; whether human-level AI can be beneficial to humans without enslaving artificial minds; and some of the ethical issues for uploading human minds to computers. This paper concludes by summarizing reasons why the TalaMind approach (Jackson 2014) could be important for beneficial human-level AI and superintelligence, the openness of TalaMind to other research approaches, and topics for future research.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
This paper considers ethical, philosophical, and technical topics related to achieving beneficial human-level AI and superintelligence. Human-level AI need not be human-identical: The concept of self-preservation could be quite different for a human-level AI, and an AI system could be willing to sacrifice itself to save human life. Artificial consciousness need not be equivalent to human consciousness, and there need not be an ethical problem in switching off a purely symbolic artificial consciousness. The possibility of achieving superintelligence is discussed, including potential for 'conceptual gulfs' with humans, which may be bridged. Completeness conjectures are given for the 'TalaMind' approach to emulate human intelligence, and for the ability of human intelligence to understand the universe. The possibility and nature of strong vs. weak superintelligence are discussed. Two paths to superintelligence are described: The first path could be catastrophically harmful to humanity and life in general, perhaps leading to extinction events. The second path should improve our ability to achieve beneficial superintelligence. Human-level AI and superintelligence may be necessary for the survival and prosperity of humanity.
... consciousness, à la (Aleksander and Morton, 2007), is open to different answers for the problem, as discussed in §4.2.7. ...
This TalaMind White Paper further discusses some topics in (Jackson 2017): reasoning with natural language syntax; interlinguas and generalized societies of mind; self-talk; artificial consciousness and the Hard Problem of consciousness.
... The thesis adapts the "axioms of being conscious" proposed by Aleksander and Morton (2007) for research on artificial consciousness. To claim a system achieves artificial consciousness it should demonstrate: ...
A comparison of Laird, Lebiere, and Rosenbloom's Standard Model of the Mind with the 'TalaMind' approach suggests some implications for computational structure and function of human-like minds, which may contribute to a community consensus about architectures of the mind.
... The logical approach has focused on systems which can be said to capture and deploy conscious experience through containing neural networks with phenomenal states (see http://www.theswartzfoundation.org/banbury e.asp). These are phenomenal by virtue of the fact that they are world-representing through learning procedures known as iconic transfer and depiction [Aleksander and Morton, 2007]. Making these states available to the researcher for evaluation as base data is part of a mental stance methodology. ...
This paper aligns recent developments on information integration and consciousness with work on axiomatic systems. Axiomatic models are usually implemented as weightless neural structures. It is argued that weightless approaches lead to a logic-based interpretation of the information generated by the states of a network. Learning of phenomenal states is raised as a necessary additional requirement before one can link integration to consciousness.
... It is with this in mind that we should consider the axiomatic approach, which forms one of the most well-known top-down approaches currently being developed in AC by Igor Aleksander and his colleagues [9][10][11]. Aleksander's approach begins with axioms, which in his case are introspectively derived features of consciousness held to be minimally necessary and (ideally) jointly sufficient features of any system to which we would be likely to ascribe consciousness. ...
... These can be understood as aspects or dimensions of the way that the world is presented to us; they are presentational properties, aspects or dimensions of how the world is given to us. They are also deep properties in the sense that if any were absent from a putative experience we might be inclined to doubt that the putative experience were an experience at all. One set of properties which has substantial value for the weak AC approach is Metzinger's list of six multi-level 'constraints' (Table 1) (first developed in [18]). Many of the core constraints in Metzinger's list can be seen as structural properties in just the sense introduced above: they are really dimensions of how any given experience is presented, or, to put this another way, they are conditions on a given state or process being considered an experience. ...
... An interesting attempt at this which respects beeper-based sampling can be found in [20]. We leave open for the moment whether the structural properties referred to herein are jointly necessary and/or sufficient for consciousness, although it is likely that some of these properties will turn out to be more central than others and that some will turn out to be necessary for only certain types of conscious experiences. Metzinger calls these constraints, and develops them at multiple levels; we will call them properties for reasons that will become apparent. ...
Synthetic methods in science can aim at either instantiating a target phenomenon or simulating key mechanisms underlying that phenomenon; 'strong' and 'weak' approaches, respectively. While the former assumes a mature theory, the latter finds its value in helping specify such theories. Here, we argue that artificial consciousness is best pursued as a (weak) means of theory development in consciousness science, and not as a (strong) axiom-driven project to build a conscious artefact. As with the other sciences of the artificial (intelligence, life), artificial consciousness can contribute by elaborating the possibilities and limitations of candidate mechanisms, transforming properties into mechanism-based criteria, and as a result potentially unifying apparently distinct properties via new mechanism-based concepts. We illustrate our arguments by discussing both axiom-driven and neurobiologically grounded approaches to artificial consciousness.