Igor Aleksander and Barry Dunmall

Axioms and Tests for the Presence of Minimal Consciousness in Agents

I: Preamble
This paper relates to a formal statement of the mechanisms that are thought minimally necessary to underpin consciousness. This is expressed in the form of axioms. We deem this to be useful if there is ever to be clarity in answering questions about whether this or the other organism is or is not conscious. As usual, axioms are ways of making formal statements of intuitive beliefs and looking, again formally, at the consequences of such beliefs. The use of this style of exposition does not entail a claim to provide a mathematically rigorous formal deductive system. Conventional mathematical notation is used to achieve clarity, although this is elaborated with natural language in an attempt to reduce terseness. In our view, making the approach axiomatic is synonymous with building clear usable tests for consciousness and is therefore a central feature of the paper.
The extended scope of this approach is to lay down some essential properties that should be considered when designing machines that could be said to be conscious. In the broader discussion about the nature of consciousness and its neurological mechanisms, it may seem to some that axiomatisation is premature and continues to beg many questions. However, the approach is meant to be open-ended so that others can build further axiomatic clarifications that address the very large number of questions which, in the search for a formal basis for consciousness, still remain to be answered. Of course, in discussions about consciousness many will also argue that the subject is not one that may ever be formally addressed by means of axioms. The view taken in this paper is 'let's try to do it and see how far it gets'.
The outcome has been positive as it formalises an intuition about which objects have mechanisms which could, potentially, make them conscious. So if some object does not possess some of the major mechanisms indicated by the minimal set it is unlikely that it could be considered to be conscious. For example, a computer per se has no mechanisms that are indicated by the axioms and is
therefore not conscious. But a virtual machine hosted by an unconscious computer that does obey the axioms might be potentially conscious of, say, a virtual world for which the axioms apply. (This supports a notion voiced by Dennett (1993, p. 210) where he argues that consciousness can be understood as a virtual machine running on a parallel neural computer.) This is visited again at the end of the paper.
II: Scope of the Paper
A set of axioms is proposed which aim to define a minimal set of necessary material conditions for cognitive mechanisms that would support consciousness in an agent. For the purpose of this paper 'agent' is used in the sense of an active object that can sense its environment, have a purpose, plan according to that purpose and then choose to act purposefully. An agent can be biological, non-biological and, indeed, virtual (i.e. software running on a host computer). However, casting this material in the domain of physically realisable robot-like machinery is exploited as a hook on which to hang the more general axiomatic theory. The axioms are about perception, imagination, contemplation, planning, acting and emotions. These and other related terms are to be understood in their usual sense in cognitive neuroscience. Hypotheses and predictions that arise from these axioms are examined in the light of findings in biological systems. It will be seen that these indicate that, in order to support consciousness, mechanisms need to be cellular and influenced not only by external perceptual events, but also by the actions needed to gather and organise the perceptual sensations. The collation of axioms and corollaries leads to a final hypothesis about what objects may be said to be conscious. Also, as this theory introduces a variant form of functionalism, the paper concludes with a consideration of some philosophical issues that might be perturbed by this approach.
III: Discerning Consciousness in an Agent
Consider two very similar artificially built agents (A and B) that perform a set of tasks competently. Say that someone claims that one of these (A) is conscious while the other (B) is pre-programmed using conventional computing techniques to cover all eventualities that are likely to be encountered in the performance of the task. Central to our theory is a question we simply call Q.

Q: Is there a set of tests that will substantiate the claim that (A) performs consciously while (B) does not?

To answer this, even a rudimentary working definition D of 'being conscious' makes the point.

D: Being conscious is defined by 'having a private sense: of an "out-there" world, of a self, of contemplative planning and of the determination of whether, when and how to act'.
(We are fully aware that this will not satisfy those who are concerned with the results of mirror tests on animals which distinguish between conscious organisms which are 'just' conscious and those that are self-conscious. We take a view parallel to that of Baars (2001) that all mammals are conscious. For us a reaction to a mirror is merely evidence of an extra ability to express what we have called having a private sense of 'self'.)
There appear to be three major approaches to tackling Q: behavioural tests, tests based on an introspective knowledge of consciousness and tests based on a knowledge of mechanism and function. We discard behavioural tests as it has already been said that the two agents perform roughly the same activity. It is also generally accepted that while consciousness may be attributed to agents with an impressive behaviour it cannot be unequivocally inferred from such behaviour. Then, by definition, tests cannot be derived from an introspective feeling of consciousness, as they have to be third-person assessments. While it can be fully accepted that we will never be able to know what it is like to be a conscious agent (Nagel, 1974), the real question is what mechanism the conscious agent needs to possess for it to have a sensation of its own presence and affordances for action in its world.
In this paper we take the view that this inevitably implies that it is only through an assessment and analysis of the mechanisms of the organism that we can conclude whether such mechanisms can support consciousness or not within that organism.
IV: The Axiomatic Approach
The axioms of this theory are abstractions drawn from D, the rudimentary definition of consciousness. In section IX this rudimentary definition is then reviewed in the light of the axioms. As with many axiomatic theories, the real-world problem acts as a stimulator for the origination of axioms which are then examined as abstractions for the generation of hypotheses and theorems that may be reflected back into the real-world domain. Almost incidentally these abstractions may be implemented in the design of artificially conscious agents. In particular, they are candidates for a series of tests directed at the conscious/non-conscious agent question. We leave the setting of this approach against extant theories about consciousness until the end of this paper. The details of the axiomatic approach are discussed next: we first set out the axioms without explanation and then examine each in greater detail.
Let A be an agent in a sensorily-accessible world S. For A to be conscious of S it is necessary that:

Axiom 1 (Depiction):
A has perceptual states that depict parts of S.

Axiom 2 (Imagination):
A has internal imaginational states that recall parts of S or fabricate S-like sensations.
Axiom 3 (Attention):
A is capable of selecting which parts of S to depict or what to imagine.

Axiom 4 (Planning):
A has means of control over imaginational state sequences to plan actions.

Axiom 5 (Emotion):
A has additional affective states that evaluate planned actions and determine the ensuing action.
In the rest of this paper we show the theoretical and functional characteristics that are implied by these axioms. It should be said that S is defined by the senses available to the agent. In humans this adopts the broadest character: it refers not only to the world that can be accessed through all external sensory modalities, but also to physical bodily events that could be internally generated such as heartburn, satiation, thirst, pain, and so on. In an agent equipped only with vision and audition (say), S would be only an auditory and visual world.
V: Depiction
Axiom 1: A has perceptual states that depict parts of S
We shall pursue the notion that S is a conjunction of minimally perceivable events (where an event is understood as a change in the world). This is familiar in vision, for example, where it is possible to build a visual world out of dots as is done in newspapers or television. We further assume that there is an 'energy' involved in any event that can be perceived. In general terms we define a minimally perceivable event *S_j as a minimal element of the sensory world S which, were it to have less energy, would not be perceived at all. That is, *S_j indicates the minimum event, the smallest change in the world of which an organism can become conscious. We then write S as:

S = U_j (*S_j)

where U_j is an assembly (or, in logic, a conjunction) over all values of j, that is, of all the minimal events of which the organism could become conscious.
Now we assume that just one minimally perceivable element in S has changed. In order for this to be perceived, some functional support in A has to change. We define this by means of a corollary related to Axiom 1.

COROLLARY 1.1: To be conscious of *S_j, it is necessary for A to have a functional support, say N(*S_j).

By functional support we mean that the state of some part of the machinery of A must have changed. In other words, for an organism to perceive a minimal event in the world requires a related physical change of some state variables of that organism. In neurology N(*S_j) is the state of firing of neuronal groups. In the abstract sense N(*S_j) is a unique set of values of a number of state variables related to that event. For a different event, that is, a different j, a different set of state variables would be affected.
On notation: there could be conditions under which the state variables which support N(*S_j) may fail to hold the world-related state. That is, the mechanisms which are necessary for being conscious may not be operating in a conscious mode. In this paper this applies to all cases where the notation N(x) is used.

Assuming a physical world of three spatial dimensions, j has three spatial co-ordinates (x, y, z). It is assumed that (x, y, z) = (0, 0, 0) is the 'point of view' of the organism (or, in more philosophical terms, the location of the observing self), giving all other world events 'out there' measurements at some distance from this point of view. Clearly j needs to be encoded in N(*S_j) in order to retain 'out-thereness' in the inner functional support and this leads to the next corollary.
COROLLARY 1.2: To achieve N(*S_j) it is necessary for A to have a functional support for the measurement of j. We call this measurement the 'j-referent'.

In living organisms j is often provided by motor action such as the movement of the eyes when foveating on a minimal visual event, or, in some animals, the ability to locate the source of an odour through head movement. In human audition, phase differences and the shape of the ear locate sound. The general point made in corollary 1.2 is that the encoding of the 'where' of a minimal event is as necessary as the qualitative encoding of the sensory event itself in order to give this encoding uniqueness. This leads to an important hypothesis.
HYPOTHESIS 1: It is sufficient for each state variable to be indexed by j for the grouping into N(*S_j) to be uniquely identified with *S_j, irrespective of the physical location of the variables.

In other words the minimal event is fully encoded internally by the indexing of its state variables irrespective of their physical location. This hypothesis has been discussed elsewhere (Aleksander and Dunmall, 2000). It resolves the 'binding' problem by eliminating it. So in the primate visual system this suggests that neurons in disparate areas such as V3, V4 and V5 may combine to support a single minimal conscious event (e.g. the perception of a small red diamond moving in some direction) with no binding other than the j-indexing of specific neurons.
This also leads to a prediction.
PREDICTION 1.1: j-indexed neurons will be found in living conscious organisms.

This prediction should be seen in retrospect as motor-indexed neurons have actually been found in primates since 1989, starting with the work of Galletti and Battaglini (1989), followed by many others. In fact it is quite remarkable how widely spread such discoveries are in the brain: gaze indexing in V3, arm motor indexing in V6, head position indexing in the parietal cortex, and so on.
Given all this it is now possible to define N(S), a depiction of S, as the functional support required to be conscious of S, where

N(S) = U_j N(*S_j)

This suggests that all the values of j can be gathered in parallel, which somewhat contradicts the practicality of generating values of j through muscular action
(such as eye movement). So in living systems, during a perceptual act, internally, j has a sequential nature j(t), which makes it necessary for N(*S_j(t)) to have persistence in time (see 'Attention').
In summary, the importance of Axiom 1 is that it suggests a method (the depiction) of internal support for the sensation of an 'out-there' world. It is noted that this strongly suggests a cellularity that determines minimally perceivable events. The support indexes cells to encode the 'out-there' coordinates of minimal events and needs to be compositional through cell interaction, to match the composition of non-minimal events in the world from minimal ones.
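To make the notation concrete, the following minimal Python sketch (our own illustration; all names, thresholds and feature labels are hypothetical) shows a depiction built from j-indexed minimal events: each group of state variables N(*S_j) is keyed by its j-referent, measured from the agent's point of view at (0, 0, 0), and N(S) is simply the assembly of those groups. As in Hypothesis 1, the j index alone identifies a group, irrespective of where its variables physically reside.

```python
from typing import Dict, Tuple

# A j-referent: 'out-there' coordinates measured from the point of view (0, 0, 0).
JReferent = Tuple[float, float, float]


class Depiction:
    """N(S): functional support for being conscious of the sensory world S.

    Each minimal event *S_j is supported by a group of state variables
    N(*S_j) indexed by its j-referent; no binding other than this indexing
    is assumed (Hypothesis 1)."""

    def __init__(self) -> None:
        self.groups: Dict[JReferent, dict] = {}

    def depict(self, j: JReferent, features: dict,
               energy: float, threshold: float = 1.0) -> None:
        # A minimal event is depicted only if its 'energy' reaches the
        # perceptual threshold; with less energy it is not perceived at all.
        if energy >= threshold:
            self.groups[j] = features

    def union(self) -> Dict[JReferent, dict]:
        # N(S) = U_j N(*S_j): the assembly of all depicted minimal events.
        return dict(self.groups)


# Usage: a small red moving patch depicted at one out-there location.
n_s = Depiction()
n_s.depict(j=(0.2, 0.0, 1.5),
           features={"colour": "red", "motion": "rightward"}, energy=2.0)
print(n_s.union())
```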
VI: Imagination
Axiom 2: A has internal imaginational states that recall parts of S or fabricate
S-like sensations
Axiom 2 comes from the fact that any organism that is conscious must be conscious of imagined as well as perceived events. Here follow some implied mechanisms that support imagination.
COROLLARY 2.1: The state of a set of cells which supports conscious sensation necessarily has at least two components

N(C) = N(S), N(I)

where N(I) is a j-referenced set of distributed cell states of a dynamic system to which significant states of N(S) are transferred (see corollary 2.2), becoming attractors in this set of variables.
Attractors in N(I), in psychological language, would be called recalled memories of past experience, and N(I) in neurological language is a neural memory site. So the process of acquiring experience that may be 'replayed internally' as indicated in the above corollary is a sequence of collective cell states with input N(S) and state N(I). It is suggested that if both are j-indexed they will automatically overlap in consciousness. The next corollary suggests one way in which this transfer may be achieved.
COROLLARY 2.2: One possible way for N(I) to inherit the j-referent is by transfers from N(S) to N(I) through Iconic learning.

Iconic learning (Aleksander, 1996, p. 122) in a cellular or neural system is the process whereby the states of a network copy the pattern on the input variables of the machine while learning is taking place. Learning is the process whereby each cell associates its own current state with the inputs derived from the outputs of other cells in the system and external inputs.

We note that what we have described is a system of cells each of which determines its state from external input and the states of other cells. In automata theory this is called a state machine.
COROLLARY 2.3: Time and sequentiality are properties of any state machine; therefore N(I) can be the imagination of a sequence.
COROLLARY 2.4: N(S) is not the only input to the state machine. Other inputs could be internal emotions E (see 'Emotion'), internal sensations H (such as pain in living organisms) or verbal input V (in human systems). In very general terms then, the state equation for imagination becomes

N(I)′ = N(I) × N(S) × E × H × V × …

where N(I)′ is the next internal state to N(I) and '×' indicates a Cartesian product, that is, a combined influence of E, H, V.
This means that imagination sequences may be initiated from a variety of inputs: for example, a sensory state that indicates that a problem has to be solved (a ball is rolling off a table and has to be caught), a state of hunger that needs to be satisfied, or a verbal request that has to be understood and acted upon.
In summary, the concept of imagination that is thought to be essential for an agent to be conscious has been linked to the necessity for a state-machine structure whose states inherit the 'out-thereness' afforded by j-referencing at its controlling inputs.
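The state-machine reading of Axiom 2 can be sketched as follows (a toy illustration under our own assumptions, not the authors' implementation). During Iconic learning the internal state simply copies the j-indexed input pattern N(S); later, a partial cue from perception, emotion E, internal sensation H or verbal input V re-enters the closest stored state, which then behaves as an attractor, i.e. a recalled memory.

```python
class ImaginationStateMachine:
    """Toy sketch of N(I): states inherit the j-referent by iconically
    copying input patterns, and stored states act as attractors (memories)."""

    def __init__(self):
        self.attractors = []       # learned states of N(I)
        self.state = frozenset()   # current imaginational state

    def iconic_learn(self, n_s):
        """Iconic learning: the state copies the pattern on the inputs."""
        pattern = frozenset(n_s)
        self.state = pattern
        self.attractors.append(pattern)

    def next_state(self, n_s=(), e=(), h=(), v=()):
        """N(I)' = N(I) x N(S) x E x H x V: the next state is driven by the
        combined influence of perception, emotion, sensation and language.
        Here the combination is a simple overlap-based attractor recall,
        with the current state N(I) breaking ties."""
        cue = frozenset(n_s) | frozenset(e) | frozenset(h) | frozenset(v)
        if not self.attractors:
            return self.state
        self.state = max(self.attractors,
                         key=lambda a: (len(a & cue), len(a & self.state)))
        return self.state


# Usage: learn two j-indexed experiences, then recall one from a partial cue.
m = ImaginationStateMachine()
m.iconic_learn({("j1", "red"), ("j2", "round"), ("j3", "rolling")})
m.iconic_learn({("j4", "blue"), ("j5", "square")})
print(m.next_state(n_s={("j1", "red")}))  # recalls the 'red, round, rolling' state
```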
VII: Attention
Axiom 3: A is capable of selecting which parts of S to depict or what to
imagine.
Attention is necessary whenever the build-up of N(S) from S cannot be done in parallel. It addresses the question of what generates the j-referent. It is a search process made necessary by any restriction in the degree of parallelism which is available at the input of a system. It is initially driven by agent actions. These actions create the j-referents that enable N(S) to be internally reconstituted. It applies to N(I) as well, as a way of bringing detail into consciousness. That is, there is an N(*I_j) extracted from N(I) which of necessity parallels the N(*S_j) related to N(S).
COROLLARY 3.1: Attention is a search process made necessary by any restriction in the capacity of a sensory input channel. It is initially driven by agent actions to make depiction as complete as possible. These actions create the j-referents that enable N(S) to be internally reconstituted.
The best known example of external attention — i.e. applied to N(S) — is eye movement and foveation. This draws attention to the fact that j-referencing can take place for a group of minimal events simultaneously (i.e. whatever is foveated for one eye position).
COROLLARY 3.2: Attention is a search process that can be context independent or context dependent on S or N(S) or N(I).
COROLLARY 3.3: With respect to N(I), attention is the inverse process to depictive construction.
Again in primate vision, these three modes can be observed. The neurological system that involves the frontal eye fields and the superior colliculus is the evidence that all three modes indicated by corollaries 3.1, 3.2 and 3.3 are possible for perception. This is the system which acts directly on the eye muscles and receives input directly from the retina as well as from higher visual areas. It moves the eye to areas of high change in the retinal image, or, when it receives inputs from higher areas, it can move the eyes to where important information is predicted to occur.
HYPOTHESIS 2: Inner attention uses the j-referent mechanisms that generate N(S) to select parts of N(I).

In other words, selection mechanisms which drive action in external attention (such as the activation of gaze-locked cells in vision) are at work when attending to imagined sensations. They (i.e. the j-referents) may be a crucial indexing set of signals which position elements of N(I) in the world even when the world is not being perceived. Actuators may or may not be stimulated. If they are not, a clear vetoing mechanism needs to be found.
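One way to picture corollaries 3.1–3.3 and Hypothesis 2 in code (a sketch under our own assumptions, not a model of any particular neural circuit): the same selection step that generates j-referents during perception is reused to select parts of N(I) during inner attention, with an optional context bias standing in for the context-dependent mode.

```python
def attend(field, context=None, top_k=1):
    """Select the j-referents to attend to next.

    'field' maps a j-referent to a salience value: for external attention it
    comes from S (e.g. regions of high change on the retina); for inner
    attention it comes from N(I) (Hypothesis 2).  'context' optionally biases
    the search (corollary 3.2, context-dependent attention)."""
    def score(item):
        j, salience = item
        bias = context.get(j, 0.0) if context else 0.0
        return salience + bias
    return [j for j, _ in sorted(field.items(), key=score, reverse=True)[:top_k]]


# External attention: foveate the most rapidly changing region of S.
retinal_change = {(0.1, 0.0, 2.0): 0.9, (-0.3, 0.2, 2.0): 0.1}
print(attend(retinal_change))

# Inner attention: the same selection applied to an imagined scene N(I),
# biased by a context (e.g. a question about where something was left).
imagined = {("hall", 1): 0.2, ("kitchen", 3): 0.3}
print(attend(imagined, context={("kitchen", 3): 0.5}))
```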
VIII: Planning
Axiom 4: A has means of control over imaginational state sequences to plan
actions.
So far N(I) has been described as a passive recall process, possibly indexed by
some sort of need. Here we see the same state machine mechanisms working
against needs that eventually lead to motion or other forms of action on the part
of the agent.
DEFINITION 4.1: Planning is the traversing of trajectories in N(I) under the
input control of a need, or a desire, or no input at all, but in all cases leading
to the selection of one among several possible outward actions.
COROLLARY 4.1: During exploration, action results in j-referents; to have planning, it is necessary for the j-referents linked to state trajectories in N(I) also to link to potential action sequences.
The implication of this definition and corollary is that a trajectory in N(I) is a depiction of 'if I do Y, will it result in X?', where X is a need or a desire and Y is a set of j-referents on the input of the state machine with the N(I) states. However this allows that X need not be fully defined, leading to something that could be called internal brainstorming. The importance of the role of the j-referent is stressed, as 4.1 closes the loop between exploratory mechanisms, the accumulation of experience, and the use of this experience in generating actions. The need for 'gating' (see 'Emotion') is also highlighted, in the sense that plans need to be 'approved' before they lead to action, as will be seen in the next section. This gating has been postulated in primate neurology in cortico-thalamic loops that are 'vetoed' by the action of the anterior cingulate areas.
In summary, planning is the primary task for the N(I) state machine, and good
planning is a property which aids evolutionary survival.
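Definition 4.1 and corollary 4.1 can be read as a search over imagined trajectories. The sketch below (hypothetical names, our own simplification) traverses short sequences of j-referenced actions through the N(I) state machine run off-line and returns the first action sequence whose imagined end state satisfies the need X; emotional gating of the result is treated in the next section.

```python
from itertools import product


def plan(transition, start_state, satisfies_x, actions, horizon=3):
    """Traverse trajectories in N(I) under the control of a need X.

    'transition(state, action)' is the imagined next state (the N(I) state
    machine run off-line); 'satisfies_x' tests whether a state meets the
    need or desire X; the returned action sequence is what would be passed
    to emotional gating (see 'Emotion') before reaching the actuators."""
    for length in range(1, horizon + 1):
        for seq in product(actions, repeat=length):
            state = start_state
            for a in seq:                 # 'if I do Y, will it result in X?'
                state = transition(state, a)
            if satisfies_x(state):
                return list(seq)
    return []                             # no plan found within the horizon


# Usage: imagine stepping towards a rolling ball until it is within reach.
def imagined_move(distance, action):
    return distance - 1 if action == "step_forward" else distance


print(plan(imagined_move, start_state=3, satisfies_x=lambda d: d <= 0,
           actions=["step_forward", "wait"]))
```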
IX: Emotion
Axiom 5: A has additional affective states that evaluate planned actions and determine the ensuing action.
As agreed by many commentators (e.g. Freeman, 1999), emotion is not just a passive change of internal state in response to perceptual events. It is closely bound up with the perception–planning–action link. Plans need to be evaluated, at times very quickly, and this implies a mapping from predictions in N(I) to evaluations of them. Indeed, to recognise that emotion may be constantly active in consciousness, the following corollary extends corollary 2.1.
COROLLARY 5.1: The total support of consciousness N(C) contains an emotional element N(E), where this is an evaluation of the states of N(I). That is,

N(C) = N(S), N(I), N(E)

where N(E) = f_e(N(I)), f_e being an evaluative function operating on N(I).
We note that f_e may be either inborn (say, fear and pleasure) or acquired, to a greater or lesser extent of refinement, through experience (envy, appreciation of beauty).
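A minimal reading of corollary 5.1 in code (our illustration; f_e here is a toy valence sum and all values are invented): the affective element N(E) evaluates imagined outcomes, and a plan is only released for action if its evaluation clears an emotional gate, in the spirit of the vetoing role attributed to the anterior cingulate in the previous section.

```python
def f_e(imagined_state, valences):
    """Evaluative function operating on N(I): sum of inborn or acquired
    valences (fear, pleasure, ...) attached to features of an imagined state."""
    return sum(valences.get(feature, 0.0) for feature in imagined_state)


def gate(planned_trajectory, valences, threshold=0.0):
    """Emotional gating: the plan leads to action only if the total
    evaluation N(E) = f_e(N(I)) of its imagined trajectory exceeds the
    threshold; otherwise it is vetoed."""
    score = sum(f_e(state, valences) for state in planned_trajectory)
    return score > threshold


valences = {"ball_caught": 1.0, "near_edge": -2.0}
safe_plan = [{"step"}, {"ball_caught"}]
risky_plan = [{"near_edge"}, {"ball_caught"}]
print(gate(safe_plan, valences))   # True: the plan is released for action
print(gate(risky_plan, valences))  # False: the plan is vetoed
```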
X: The Consciousness Hypothesis and its Implication
The rudimentary definition D given at the beginning of this paper uses the words 'having a private sense of …'. The five axioms and their corollaries have suggested the necessity of specific mechanisms without which an agent's consciousness would be incomplete or non-existent. The axioms all present necessary conditions, while it could be argued that they are not sufficient. The key lack of sufficiency lies (as in Chalmers' argument, for example) in the claim that an additional factor, say factor F, is required to map from N(C) into C, the sensation of being conscious. In our terminology:

C = F(N(C)),

where F is a mapping which (some argue, see XI below) cannot be defined using known science.
In contrast with this, the axiomatic approach suggests the following hypothesis.

HYPOTHESIS 3: For every event in C there is a corresponding event in N(C). That is, in the formulation

C = F(N(C)),

the mapping F is one-to-one, making C a complete description of N(C) and N(C) a complete description of C.

The implication of this is that in order to have a complete theory of consciousness C it is sufficient to have a complete theory of N(C).
XI: A Philosophical Note
At first sight the 1:1 relationship in hypothesis 3 appears to be classically reductionist. However we have taken care not to eliminate F (some fixed relationship) in this hypothesis, by recognising the difference between C and N(C). We are merely arguing that any theoretical prediction of an event at the subjective level must be formulated at the functional level. Similarly, if an explanation is required of a reported event at the subjective level, a science of consciousness will find it at the functional level. Therefore a 'science of consciousness', if properly conducted at the functional level, will also be a science of the subjective nature of consciousness. In other words, we are not arguing that the sensation of pain is the same as saying that some specific neurons are firing. What we are saying is that the only science that concerns pain is the science that relates to its neural aspect.
We also avoid the expansionist viewpoint of Chalmers (1996) in the sense that we are not saying that N(C) causes or generates some prime event in some space of fundamental, non-physical subjective C-level. We accept these subjective events but do not accept that they require any science which lies over and above that of the functional level. We therefore accept the special nature of the first-person sensation, but argue not that it requires an as yet undiscovered science or link to the functional, but that it is merely a reflection of the functional. In this we present ideas that are similar to those of Velmans (2000) on the 'reflexivity' of consciousness. To provide an analogy that plays on the 'reflection' comment, consider the virtual world of reflections in mirrors, which only requires the physics of the tangible physical world to explain any event that may be observed in the reflected virtual world.
With respect to physiological theories, we stand far aside from the sequence of attacks on computational theories by Penrose (1994) and Hameroff (1994), which require a quantum source to explain the coexistence of mental states, as when contemplating a problem and visualising its solution simultaneously. We have given a counterexample of this in Aleksander and Dunmall (2000), where an imagined state can coexist with a perceptual state through the concept of the j-referent of corollary 2.1. We would argue that the axiomatic approach in this paper is thoroughly computational.
We do, however, side with the physiological theories of Crick and Koch (1995) in the sense that all sensation has a neural correlate. We differ slightly only in not seeing binding between neural areas as being crucial for superimpositions of neural activities in different brain areas to create a coherent sensation, as indicated in Hypothesis 1, which is also based on the concept of the j-referent of corollary 2.1.
XII: Conclusion: Consciousness in Machines
In the implication that follows Hypothesis 3, it is necessary to ask what is meant by 'a complete theory of N(C)'? The answer relates back to the five axioms, which suggest that it is necessary for certain mechanisms to be present in an agent to create a form (albeit minimal) of consciousness.
These mechanisms are recognisable through an analysis of the structural (anatomical in living organisms) and functional (neurological in living organisms) characteristics of a given agent. Such analysis is couched in terms that are computationally realisable. This implies that it is possible to transfer the understood mechanisms into computational agents and thus create systems with a 'machine' consciousness.
Of course, any theories of N(C) would not be complete at any point in time. Indeed 'completeness' may be something that can only be approached asymptotically. However, whatever the state of understanding of N(C) is at any point in time, the contention of this paper is that it is machine testable by methods such as computational neuromodelling.
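As an illustration of the kind of machine test that question Q invites (entirely our own scaffolding; in practice each predicate would be grounded in a structural and functional analysis of the candidate agent, not in its behaviour alone), the five axioms can be cast as a checklist that an agent either passes or fails.

```python
AXIOMS = ["depiction", "imagination", "attention", "planning", "emotion"]


def assess(agent_mechanisms):
    """Return which axiomatic mechanisms a candidate agent exhibits.

    'agent_mechanisms' maps each axiom name to a boolean established by
    analysing the agent's structure and function.  An agent lacking any of
    the five is, on this account, not minimally conscious of its world S."""
    missing = [a for a in AXIOMS if not agent_mechanisms.get(a, False)]
    return {"minimally_conscious": not missing, "missing": missing}


# A conventional pre-programmed agent (agent B of question Q):
print(assess({"planning": True}))
# A laboratory robot exhibiting only the first three mechanisms:
print(assess({"depiction": True, "imagination": True, "attention": True}))
```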
The primary understanding of N(C) is likely to come from neurology. An advantage of transferring this to machines is that hypotheses in neurology can be tested by computation. This was the procedure used in our work on the mechanisms of visual awareness (Aleksander and Dunmall, 2000) and is currently being applied to the discovery of the cause of visual memory deficits among sufferers of Parkinson's disease. Also, elements of N(C) may be directly applied to machinery, as is currently being done to enable a mobile robot with vision to develop a sense of visual awareness.
But in addition to what is known in neurology, the transfer of mechanisms into computation allows knowledge to be added to theories of N(C). A most important feature of a neuromodel is that the content of the conscious experience can be decoded and displayed on the screen (due to Axiom 1 — Depiction). This is a function of the structural knowledge that one has of the connections and coding in the machine. It is unlikely that with the current state of brain scanning systems such a feat could be done for living organisms. It may, however, become possible in the future, at which point the idea that F is more than a 1:1 relationship will collapse.
In the preamble to the paper we drew attention to the fact that some commentators on consciousness will see the C, which is at the focus of the axioms, as being something different from their concept of consciousness. They argue that this therefore begs the question of whether the mechanisms in this paper are relevant to whatever it is that they call consciousness. However, they would go on to agree that a rock or a computer is not conscious while a human being is. A rock or an unprogrammed computer has none of the axiomatic mechanisms while a bat or a human being might have. We argue therefore that some correlation exists between the presence of consciousness in objects that are reasonably well agreed to be conscious and the existence of the axiomatic mechanisms. However, as mentioned in our preamble, were there to exist a virtual machine in the host computer, and if the presence of the five axiomatic mechanisms could be demonstrated, in what sense could the virtual machine be said to be conscious? The axioms would lead us to believe that the computer could either be conscious of a virtual world resident in the computer or of a real world, were the computer to be the 'brain' of a robot. An example of a non-conscious object may be found in some recently built robots in our laboratory (which sounds like no great
achievement). At best one can identify in them the mechanisms that accord with just the first three axioms. Are they conscious? Well, our approach would say that they are not, but given a development or evolution of the remaining two axiomatic mechanisms, what arguments could be used to deny them consciousness? So, here too, we argue that the axiomatic approach issues a challenge to its detractors to provide at least a logical, if not axiomatic, argument as to why consciousness is something unrelated to the axioms in this paper.
References

Aleksander, Igor (1996), Impossible Minds: My Neurons, My Consciousness (London: Imperial College Press).
Aleksander, Igor and Dunmall, Barry (2000), 'An extension to the hypothesis of the asynchrony of visual consciousness', Proc R Soc Lond B, 267, pp. 197–200.
Baars, Bernard (2001), 'Surrounded by consciousness', paper presented at the conference Towards a Science of Consciousness, Skövde, Sweden.
Chalmers, David J. (1996), The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford University Press).
Crick, Francis and Koch, Christof (1995), 'Are we aware of neural activity in primary visual cortex?', Nature, 375, pp. 121–3.
Dennett, Daniel (1993), Consciousness Explained (London: Penguin).
Freeman, Walter J. (1999), How Brains Make Up Their Minds (London: Weidenfeld and Nicolson).
Galletti, Carlo and Battaglini, Paolo (1989), 'Gaze-dependent visual neurons in area V3A of monkey prestriate cortex', Journal of Neuroscience, 9, pp. 1112–25.
Hameroff, Stuart (1994), 'Quantum coherence in microtubules: A neural basis for emergent consciousness?', Journal of Consciousness Studies, 1 (1), pp. 91–118.
Nagel, Thomas (1974), 'What is it like to be a bat?', Philosophical Review, 83, pp. 435–50.
Penrose, Roger (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford: Oxford University Press).
Velmans, Max (2000), Understanding Consciousness (London: Routledge).
The paper begins with a general introduction to the nature of human consciousness and outlines several different philosophical approaches. A critique of traditional reductionist and dualist positions is offered and it is suggested that consciousness should be viewed as an emergent property of physical systems. However, although consciousness has its origin in distributed brain processes it has macroscopic properties - most notably the `unitary sense of self', non-deterministic free will, and non-algorithmic `intuitive' processing - which can best be described by quantum-mechanical principles.