Igor Aleksander and Barry Dunmall
Axioms and Tests for the Presence of
Minimal Consciousness in Agents
This paper relates to a formal statement of the mechanisms that are thought mini-
mally necessary to underpin consciousness. This is expressed in the form of axi-
oms. We deem this to be useful if there is ever to be clarity in answering questions
about whether this or the other organism is or is not conscious. As usual, axioms
are ways of making formal statements of intuitive beliefs and looking, again for-
mally, at the consequences of such beliefs. The use of this style of exposition
does not entail a claim to provide a mathematically rigorous formal deductive
system. Conventional mathematical notation is used to achieve clarity, although
this is elaborated with natural language in an attempt to reduce terseness. In our
view, making the approach axiomatic is synonymous with building clear usable
tests for consciousness and is therefore a central feature of the paper.
The extended scope of this approach is to lay down some essential properties
that should be considered when designing machines that could be said to be con-
scious. In the broader discussion about the nature of consciousness and its neuro-
logical mechanisms, it may seem to some that axiomatisation is premature and
continues to beg many questions. However, the approach is meant to be
open-ended so that others can build further axiomatic clarifications that address
the very large number of questions which, in the search for a formal basis for con-
sciousness, still remain to be answered. Of course, in discussions about con-
sciousness many will also argue that the subject is not one that may ever be
formally addressed by means of axioms. The view taken in this paper is ‘let’s try
to do it and see how far it gets’.
The outcome has been positive as it formalises an intuition about which
objects have mechanisms which could, potentially, make them conscious. So if
some object does not possess some of the major mechanisms indicated by the
minimal set it is unlikely that it could be considered to be conscious. For exam-
ple, a computer per se has no mechanisms that are indicated by the axioms and is
Journal of Consciousness Studies,10, No. 4–5, 2003, pp. ??–??
Correspondence: Igor Aleksander and Barry Dunmall, Intelligent and Interactive Systems Group,
Department of Electrical and Electronic Engineering, Imperial College, London SW7 2BT, U.K.
therefore not conscious. But a virtual machine hosted by an unconscious com-
puter that does obey the axioms might be potentially conscious of, say, a virtual
world for which the axioms apply. (This supports a notion voiced by Dennett
(1993, p. 210) where he argues that consciousness can be understood as a virtual
machine running on a parallel neural computer.) This is visited again at the end
of the paper.
II: Scope of the Paper
A set of axioms is proposed which aim to define a minimal set of necessary mate-
rial conditions for cognitive mechanisms that would support consciousness in an
agent. For the purpose of this paper ‘agent’ is used in the sense of an active object
that can sense its environment, have a purpose, plan according to that purpose
and then choose to act purposefully. An agent can be biological, non-biological
and, indeed, virtual (i.e. software running on a host computer). However, casting
this material in the domain of physically realisable robot-like machinery is
exploited as a hook on which to hang the more general axiomatic theory. The axi-
oms are about perception, imagination, contemplation, planning, acting and
emotions. These and other related terms are to be understood in their usual sense in
cognitive neuroscience. Hypotheses and predictions that arise from these axioms
are examined in the light of findings in biological systems. It will be seen that
these indicate that, in order to support consciousness, mechanisms need to be cel-
lular and influenced not only by external perceptual events, but also by the
actions needed to gather and organise the perceptual sensations. The collation of
axioms and corollaries leads to a final hypothesis about what objects may be said
to be conscious. Also, as this theory introduces a variant form of functionalism,
the paper concludes with a consideration of some philosophical issues that might
be perturbed by this approach.
III: Discerning Consciousness in an Agent
We are given two very similar artificially built agents (A and B) that perform a set of
tasks competently. Say that someone claims that one of these (A) is conscious
while the other (B) is pre-programmed using conventional computing techniques
to cover all eventualities that are likely to be encountered in the performance of
the task. Central to our theory is a question we simply call Q.
Q: Is there a set of tests that will substantiate the claim that (A) performs
consciously while (B) does not?
Answering this with even a rudimentary working definition D of ‘being conscious’ makes the point.
D: Being conscious is defined by ‘having a private sense: of an “out-there”
world, of a self, of contemplative planning and of the determination of
whether, when and how to act’.
(We are fully aware that this will not satisfy those who are concerned with the
results of mirror tests on animals which distinguish between conscious organ-
isms which are ‘just’ conscious and those that are self-conscious. We take a par-
allel view to Baars, 2001, that all mammals are conscious. For us a reaction to a
mirror is merely evidence of an extra ability to express what we have called hav-
ing a private sense of ‘self’.)
There appear to be three major approaches for tackling Q: behavioural tests,
tests based on an introspective knowledge of consciousness and tests based on a
knowledge of mechanism and function. We discard behavioural tests as it has
already been said that the two agents perform roughly the same activity. It is also
generally accepted that while consciousness may be attributed to agents with an
impressive behaviour it cannot be unequivocally inferred from such behaviour.
Then, by definition, tests cannot be derived from an introspective feeling of con-
sciousness, as they have to be third-person assessments. While it can be fully
accepted that we will never be able to know what it is like to be a conscious agent
(Nagel, 1974), the real question is what mechanism does the conscious agent
need to possess for it to have a sensation of its own presence and affordances for
action in its world.
In this paper we take the view that this inevitably implies that it is only through
an assessment and analysis of the mechanisms of the organism that we can con-
clude whether such mechanisms can support consciousness or not within that organism.
IV: The Axiomatic Approach
The axioms of this theory are abstractions drawn from D, the rudimentary definition of consciousness. In section IX this rudimentary definition is then
reviewed in the light of the axioms. As with many axiomatic theories, the
real-world problem acts as a stimulator for the origination of axioms which are
then examined as abstractions for the generation of hypotheses and theorems that
may be reflected back into the real-world domain. Almost incidentally these
abstractions may be implemented in the design of artificially conscious agents.
In particular, they are candidates for a series of tests directed at the conscious/
non-conscious agent question. We leave until the end of this paper setting this
approach against extant theories about consciousness. The details of the axiom-
atic approach are discussed next: we first set out the axioms without explanation
and then examine each in greater detail.
Let A be an agent in a sensorily-accessible world S. For A to be conscious of S
it is necessary that:
Axiom 1 (Depiction):
A has perceptual states that depict parts of S.
Axiom 2 (Imagination):
A has internal imaginational states that recall parts of S or fabricate S-like sensations.
Axiom 3 (Attention):
A is capable of selecting which parts of S to depict or what to imagine.
Axiom 4 (Planning):
A has means of control over imaginational state sequences to plan actions.
Axiom 5 (Emotion):
A has additional affective states that evaluate planned actions and determine
the ensuing action.
In the rest of this paper we show the theoretical and functional characteristics
that are implied by these axioms. It should be said that S is defined by the senses
available to the agent. In humans this adopts the broadest character: it refers not
only to the world that can be accessed through all external sensory modalities,
but also to physical bodily events that could be internally generated such as
heartburn, satiation, thirst, pain, and so on. In an agent equipped only with vision
and audition (say), S would be only an auditory and visual world.
Axiom 1: A has perceptual states that depict parts of S
We shall pursue the notion that S is a conjunction of minimally perceivable
events (where an event is understood as a change in the world). This is familiar in
vision, for example, where it is possible to build a visual world out of dots as is
done in newspapers or television. We further assume that there is an ‘energy’
involved in any event that can be perceived. In general terms we define a minimally perceivable event *Sj as a minimal element of the sensory world S which, were it to have less energy, would not be perceived at all. That is, *Sj indicates the minimum event, the smallest change in the world of which an organism can become conscious. We then write S as:

S = ∪j *Sj

where ∪j is an assembly (or, in logic, a conjunction) over all values of j, that is of all the minimal events of which the organism could become conscious.
Now we assume that just one minimally perceivable element in S has changed.
In order for this to be perceived, some functional support in A has to change. We
define this by means of a corollary related to Axiom 1.
COROLLARY 1.1: To be conscious of *Sj, it is necessary for A to have a functional support, say N(*Sj).
By functional support we mean that the state of some part of the machinery of A
must have changed. In other words, for an organism to perceive a minimal event
in the world requires a related physical change of some state variables of that
organism. In neurology N(*Sj) is the state of firing of neuronal groups. In the
abstract sense N(*Sj) is a unique set of values of a number of state variables
related to that event. For a different event, that is, a different j, a different set of
state variables would be affected.
On notation: there could be conditions under which the state variables which
support N(*Sj) may fail to hold the world-related state. That is, the mechanisms
which are necessary for being conscious may not be operating in a conscious
mode. In this paper this applies to all cases where the notation N(x) is used.
Assuming a physical world of three spatial dimensions, j has three spatial
co-ordinates (x,y,z). It is assumed that (x,y,z) = (0,0,0) is the ‘point of view’ of the
organism (or, in more philosophical terms, the location of the observing self),
giving all other world events ‘out there’ measurements at some distance from
this point of view. Clearly j needs to be encoded in N(*Sj) in order to retain ‘out-thereness’ in the inner functional support, and this leads to the next corollary.
COROLLARY 1.2: To achieve N(*Sj) it is necessary for A to have a functional support for the measurement of j. We call this measurement the ‘j-referent’.
In living organisms j is often provided by motor action such as the movement of
the eyes when foveating on a minimal visual event, or, in some animals, the abil-
ity to locate the source of an odour through head movement. In human audition,
phase differences and the shape of the ear locate sound. The general point made
in corollary 1.2 is that the encoding of the ‘where’ of a minimal event is as neces-
sary as the qualitative encoding of the sensory event itself in order to give this
encoding uniqueness. This leads to an important hypothesis.
HYPOTHESIS 1: It is sufficient for each state variable to be indexed by j for the grouping into N(*Sj) to be uniquely identified with *Sj, irrespective of the physical location of the variables.
In other words the minimal event is fully encoded internally by the indexing of its
state variables irrespective of their physical location. This hypothesis has been
discussed elsewhere (Aleksander and Dunmall, 2000). It resolves the ‘binding’
problem by eliminating it. So in the primate visual system this suggests that neu-
rons in disparate areas such as V3, V4 and V5 may combine to support a single
minimal conscious event (e.g. the perception of a small red diamond moving in
some direction) with no binding other than the j-indexing of specific neurons.
This also leads to a prediction.
PREDICTION 1.1: j-indexed neurons will be found in living conscious organisms.
This prediction should be seen in retrospect as motor-indexed neurons have actu-
ally been found in primates since 1989 starting with the work of Galletti and
Battaglini (1989), followed by many others. In fact it is quite remarkable how
widely spread such discoveries are in the brain: gaze indexing in V3, arm motor
indexing in V6, head position indexing in the parietal cortex, and so on.
Given all this it is now possible to define N(S), a depiction of S, as the functional support required to be conscious of S, where

N(S) = ∪j N(*Sj)

This suggests that all the values of j can be gathered in parallel, which somewhat contradicts the practicality of generating values of j through muscular action (such as eye movement). So in living systems, during a perceptual act, internally, j has a sequential nature j(t), which makes it necessary for N(*Sj(t)) to have persistence in time (see ‘Attention’).
In summary, the importance of Axiom 1 is that it suggests a method (the depic-
tion) of internal support for the sensation of an ‘out-there’ world. It is noted that
this strongly suggests a cellularity that determines minimally perceivable events.
The support indexes cells to encode the ‘out-there’ coordinates of minimal
events and needs to be compositional through cell interaction, to match the com-
position of non-minimal events in the world from minimal ones.
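The cellular, j-indexed depiction described above can be illustrated with a minimal sketch. All names and data structures below are our own illustrative assumptions, not part of the axioms: each depicting cell carries the world coordinate j it refers to, and N(S) is simply the union of j-indexed supports, with no binding machinery beyond the shared index.

```python
# Minimal sketch of a j-indexed depiction (illustrative assumptions only):
# each "cell" stores a sensory quality plus the world coordinate j it refers to.
# N(S) is the union of the supports N(*Sj); cells supporting the same minimal
# event are grouped purely by sharing j, wherever they physically sit.

from collections import defaultdict

class Cell:
    def __init__(self, area, quality, j):
        self.area = area          # physical location, e.g. 'V4' (irrelevant to grouping)
        self.quality = quality    # e.g. 'red', 'diamond', 'moving-left'
        self.j = j                # world coordinate (x, y, z) relative to the observer

def depiction(cells):
    """Group cells into N(*Sj) by their j-referent alone (cf. Hypothesis 1)."""
    n_s = defaultdict(list)
    for c in cells:
        n_s[c.j].append(c)
    return dict(n_s)

cells = [
    Cell('V4', 'red', (1, 2, 0)),
    Cell('V3', 'diamond', (1, 2, 0)),
    Cell('V5', 'moving-left', (1, 2, 0)),
    Cell('V4', 'green', (5, 0, 0)),
]

n_s = depiction(cells)
# Cells in disparate 'areas' combine into one minimal event at j = (1, 2, 0)
print(sorted(c.quality for c in n_s[(1, 2, 0)]))  # ['diamond', 'moving-left', 'red']
```

The point of the sketch is that nothing in `depiction` inspects a cell's `area`: the ‘binding’ of qualities into one conscious event falls out of the shared j-index alone.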
Axiom 2: A has internal imaginational states that recall parts of S or fabricate S-like sensations
Axiom 2 comes from the fact that any organism that is conscious must be con-
scious of imagined as well as perceived events. Here follow some implied mech-
anisms that support imagination.
COROLLARY 2.1: The state of a set of cells which supports conscious sensation has necessarily at least two components

N(C) = N(S), N(I)

where N(I) is a j-referenced set of distributed cell states of a dynamic system to which significant states of N(S) are transferred (see corollary 2.2), becoming attractors in this set of variables.
Attractors in N(I), in psychological language, would be called recalled memo-
ries of past experience, and N(I) in neurological language is a neural memory
site. So the process of acquiring experience that may be ‘replayed internally’ as
indicated in the above corollary is a sequence of collective cell states with input
N(S) and state N(I). It is suggested that if both are j-indexed they will automati-
cally overlap in consciousness. The next corollary suggests one way in which
this transfer may be achieved.
COROLLARY 2.2: One possible way for N(I) to inherit the j-referent is by transfers from N(S) to N(I) through iconic learning.
Iconic learning (Aleksander, 1996, p. 122) in a cellular or neural system is the
process whereby the states of a network copy the pattern on the input variables of
the machine while learning is taking place. Learning is the process whereby each cell associates its own current state with the inputs derived from the outputs of other cells in the system and external inputs.
We note that what we have described is a system of cells each of which deter-
mines its state from external input and the states of other cells. In automata the-
ory this is called a state machine.
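One way to make the attractor idea concrete is a Hopfield-style sketch. This is our choice of mechanism purely for illustration; the corollary only requires that states copied from the input become attractors. During iconic learning the network state is clamped to copy the input pattern, and Hebbian weight updates turn each copied pattern into an attractor that can later be ‘replayed internally’ from a partial cue.

```python
# Sketch of iconic learning as attractor formation (a Hopfield-style stand-in,
# not the authors' specific network): states copy input patterns while learning,
# and the copied patterns become attractors recoverable from corrupted cues.

import numpy as np

rng = np.random.default_rng(0)

def iconic_learn(patterns):
    """Hebbian learning while the state is clamped to copy each input pattern."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:            # p has entries +1/-1
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Let the state machine settle into the nearest attractor (a 'memory')."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

# Two 'perceptual' patterns transferred from N(S) into N(I)
patterns = rng.choice([-1, 1], size=(2, 64))
w = iconic_learn(patterns)

noisy = patterns[0].copy()
noisy[:8] *= -1                   # corrupt part of the cue
print(np.array_equal(recall(w, noisy), patterns[0]))  # True: the attractor is recalled
```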
COROLLARY 2.3: Time and sequentiality are the properties of any state
machine, therefore N(I) can be the imagination of a sequence.
COROLLARY 2.4: N(S) is not the only input to the state machine. Other inputs could be internal emotions E (see ‘Emotion’), internal sensations H (such as pain in living organisms) or verbal input V (in human systems). In very general terms, then, the state equation for imagination becomes

N(I)′ = f [N(I) × N(S) × E × H × V …]

where N(I)′ is the next internal state to N(I) and ‘×’ indicates a Cartesian product, that is, a combined influence of E, H, V …
This means that imagination sequences may be initiated from a variety of
inputs. For example: a sensory state indicating that a problem has to be solved (a ball is rolling off a table and has to be caught), a state of hunger that needs to be satisfied, or a verbal request that has to be understood and acted upon.
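The state equation of corollary 2.4 can be sketched as an ordinary next-state function over the Cartesian product of inputs. The state names and the transition table below are invented toys, meant only to show the shape of the mechanism:

```python
# Toy next-state function for N(I)': the next imaginational state depends on the
# current state N(I) and the combined input N(S) x E x H x V (corollary 2.4).
# All state names and transitions below are illustrative assumptions.

def next_state(n_i, n_s, e, h, v):
    table = {
        ('idle', 'ball-rolling-off-table', 'neutral', 'none', ''):  'imagine-catching',
        ('idle', 'none', 'neutral', 'hunger', ''):                  'imagine-food',
        ('idle', 'none', 'neutral', 'none', 'fetch the ball'):      'imagine-fetching',
    }
    # unlisted input combinations leave the imaginational state unchanged
    return table.get((n_i, n_s, e, h, v), n_i)

print(next_state('idle', 'ball-rolling-off-table', 'neutral', 'none', ''))  # imagine-catching
print(next_state('idle', 'none', 'neutral', 'hunger', ''))                  # imagine-food
```

The same current state leads to different imagination sequences depending on which input (sensory, bodily or verbal) dominates, which is all the corollary asserts.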
In summary, the concept of imagination, which is thought to be essential for an agent to be conscious, has been linked to the necessity for a state-machine structure whose states inherit the ‘out-thereness’ afforded by j-referencing at its construction.
Axiom 3: A is capable of selecting which parts of S to depict or what to imagine
Attention is necessary whenever the build-up of N(S) from S cannot be done in parallel. It addresses the question of what generates the j-referent. It is a search process made necessary by any restriction in the degree of parallelism which is available at the input of a system. It is initially driven by agent actions. These actions create the j-referents that enable N(S) to be internally reconstituted. It applies to N(I) as well, as a way of bringing detail into consciousness. That is, there is an N(*Ij) extracted from N(I) which of necessity parallels the N(*Sj) related to N(S).
COROLLARY 3.1: Attention is a search process made necessary by any restric-
tion in the capacity of a sensory input channel. It is initially driven by agent
actions to make depiction as complete as possible. These actions create the
j-referents that enable N(S) to be internally reconstituted.
The best known example of external attention — i.e. applied to N(S) — is eye movement and foveation. This draws attention to the fact that j-referencing can
take place for a group of minimal events simultaneously (i.e. whatever is
foveated for one eye position).
COROLLARY 3.2: Attention is a search process that can be context independent or context dependent on S or N(S) or N(I).
COROLLARY 3.3: With respect to N(I), attention is the inverse process to depictive construction.
Again in primate vision, these three modes can be observed. The neurological system that involves the frontal eye fields and the superior colliculus provides evidence that all three modes indicated by corollaries 3.1, 3.2 and 3.3 are possible for perception. This system acts directly on the eye muscles and receives input both directly from the retina and from higher visual areas. It moves the eye to areas of high change in the retinal image or, when it receives inputs from higher areas, to where important information is predicted to occur.
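Corollary 3.1’s picture of attention as a serial search under a channel restriction can be sketched as follows. The saliency values and the one-event-per-step channel are our illustrative assumptions: each step ‘foveates’ the most salient unattended j, generating a j-referent and adding that minimal support to a persistent N(S).

```python
# Attention as a serial search (corollary 3.1): the input channel admits one
# minimal event per step, so the agent acts (foveates) to generate j-referents
# and reconstitutes N(S) internally over time. Saliency values are made up.

def attend(world, steps):
    """world: dict mapping j -> (saliency, quality). Returns accumulated N(S)."""
    n_s = {}
    for _ in range(steps):
        remaining = {j: sq for j, sq in world.items() if j not in n_s}
        if not remaining:
            break
        j = max(remaining, key=lambda k: remaining[k][0])   # most salient next
        n_s[j] = world[j][1]       # the j-referent indexes the stored quality
    return n_s

world = {(0, 1): (0.9, 'red-dot'), (3, 2): (0.4, 'edge'), (5, 5): (0.7, 'motion')}
print(attend(world, steps=2))   # {(0, 1): 'red-dot', (5, 5): 'motion'}
```

The persistence of `n_s` across steps corresponds to the requirement, noted under Axiom 1, that N(*Sj(t)) persist in time while j is generated sequentially.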
HYPOTHESIS 2: Inner attention uses the j-referent mechanisms that generate
N(S) to select parts of N(I).
In other words, selection mechanisms which drive action in external attention
(such as the activation of gaze locked cells in vision) are at work when attending
to imagined sensations. They (i.e. the j-referents) may be a crucial indexing set of
signals which position elements of N(I) in the world even when the world is not
being perceived. Actuators may or may not be stimulated. If they are not, a clear
vetoing mechanism needs to be found.
Axiom 4: A has means of control over imaginational state sequences to plan actions
So far N(I) has been described as a passive recall process, possibly indexed by
some sort of need. Here we see the same state machine mechanisms working
against needs that eventually lead to motion or other forms of action on the part
of the agent.
DEFINITION 4.1: Planning is the traversing of trajectories in N(I) under the
input control of a need, or a desire, or no input at all, but in all cases leading
to the selection of one among several possible outward actions.
COROLLARY 4.1: During exploration, action results in j-referents; to have planning, it is necessary for the j-referents linked to state trajectories in N(I) also to link to potential action sequences.
The implication of this definition and corollary is that a trajectory in N(I) is a depiction of ‘if I do Y, will it result in X?’, where X is a need or a desire and Y is a set of j-referents on the input of the state machine with the N(I) states. However
this allows that X need not be fully defined, leading to something that could be
called internal brainstorming. The importance of the role of the j-referent is
stressed, as 4.1 closes the loop between exploratory mechanisms, the accumula-
tion of experience, and the use of this experience in generating actions. The need
for ‘gating’ (see ‘Emotions’) is also highlighted, in the sense that plans need to be
‘approved’ before they lead to action, as will be seen in the next section. This
gating has been postulated in primate neurology in cortico-thalamic loops that
are ‘vetoed’ by the action of the anterior cingulate areas (4).
In summary, planning is the primary task for the N(I) state machine, and good
planning is a property which aids evolutionary survival.
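Definition 4.1 and corollary 4.1 can be sketched as a search over learned trajectories in N(I), where each transition is linked to the action (via its j-referents) that produced it during exploration. The transition graph below is an invented toy:

```python
# Planning as trajectory traversal in N(I) (definition 4.1): search the learned
# transition graph for a trajectory from the current state to a needed state X,
# collecting the action sequence linked to it (corollary 4.1). The graph is an
# invented example, not the authors' implementation.

from collections import deque

transitions = {   # state -> [(action, next_state), ...] learned during exploration
    'ball-on-table':  [('reach', 'hand-near-ball'), ('wait', 'ball-falling')],
    'hand-near-ball': [('grasp', 'ball-caught')],
    'ball-falling':   [('lunge', 'ball-caught'), ('wait', 'ball-on-floor')],
}

def plan(start, goal):
    """Breadth-first traversal of imagined trajectories; returns an action list."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None   # no imagined trajectory reaches the goal

print(plan('ball-on-table', 'ball-caught'))   # ['reach', 'grasp']
```

Note that `plan` only selects an action sequence; under Axiom 5 the selected plan would still need to be ‘approved’ by an evaluative gate before being acted upon.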
Axiom 5: A has additional affective states that evaluate planned actions and determine the ensuing action
As agreed by many commentators (e.g. Freeman, 1999), emotion is not just a
passive change of internal state in response to perceptual events. It is closely
linked to the perception–planning–action link.
Plans need to be evaluated, at times very quickly. This implies a mapping from
a prediction in N(I). Indeed, to recognise that emotion may be constantly active
in consciousness, the following corollary extends corollary 2.1 as follows.
COROLLARY 5.1: The total support of consciousness N(C) contains an emotional element N(E), where this is an evaluation of the states of N(I). That is,

N(C) = N(S), N(I), N(E)

N(E) = fe [N(I)]

fe being an evaluative function operating on N(I).
We note that fe may be either inborn (say, fear and pleasure) or acquired and refined through experience to a greater or lesser extent (envy, appreciation of beauty).
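Corollary 5.1’s evaluative function fe can be sketched as a scorer over imagined outcomes, gating which plan is released to action. The outcome values, and the split into inborn and acquired components, are our own illustrative assumptions:

```python
# Sketch of the evaluative function fe (corollary 5.1): N(E) evaluates states of
# N(I), combining an inborn component (fear/pleasure) with an acquired one, and
# the highest-valued plan is the one 'approved' for action. Values are invented.

inborn   = {'ball-caught': +1.0, 'ball-on-floor': -0.2, 'hand-burnt': -1.0}
acquired = {'ball-caught': +0.5}                 # refined through experience

def f_e(state):
    return inborn.get(state, 0.0) + acquired.get(state, 0.0)

def gate(plans):
    """Release the plan whose imagined outcome in N(I) evaluates best under fe."""
    return max(plans, key=lambda p: f_e(p['outcome']))

plans = [
    {'actions': ['wait'],           'outcome': 'ball-on-floor'},
    {'actions': ['reach', 'grasp'], 'outcome': 'ball-caught'},
]
print(gate(plans)['actions'])   # ['reach', 'grasp']
```

A plan whose imagined outcome scores below every alternative is simply never released, which is one way of reading the ‘vetoing’ role attributed above to cortico-thalamic gating.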
X: The Consciousness Hypothesis and its Implication
The rudimentary definition D, given at the beginning of this paper, uses the words
‘having a private sense of …’. The five axioms and their corollaries have sug-
gested the necessity of specific mechanisms without which an agent’s conscious-
ness would be incomplete or non-existent. The axioms all present necessary conditions, while it could be argued that they are not sufficient. The key lack of sufficiency lies (as in Chalmers’ argument, for example) in the claim that an additional factor, say factor F, is required to map from N(C) into C, the sensation of being conscious. In our terminology:

C = F [N(C)]

where F is a mapping which (some argue, see XI below) cannot be defined using the science of the functional level.
In contrast with this, the axiomatic approach suggests the following:
HYPOTHESIS 3: For every event in C there is a corresponding event in N(C). That is, in the formulation

C = F [N(C)]

the mapping F is one to one, making C a complete description of N(C) and N(C) a complete description of C.
The implication of this is that in order to have a complete theory of consciousness C it is sufficient to have a complete theory of N(C).
XI: A Philosophical Note
At first sight the 1:1 relationship in hypothesis 3 appears to be classically
reductionist. However, we have taken care not to eliminate F (some fixed relationship) in this hypothesis, by recognising the difference between C and N(C).
We are merely arguing that any theoretical prediction of an event at the
subjective level must be formulated at the functional level. Similarly if an
explanation is required of a reported event at the subjective level a science of
consciousness will find it at the functional level. Therefore a ‘science of con-
sciousness’ if properly conducted at the functional level will also be a science of
the subjective nature of consciousness. In other words, we are not arguing that
the sensation of pain is the same as saying that some specific neurons are firing.
What we are saying is that the only science that concerns pain is the science that
relates to its neural aspect.
We also avoid the expansionist viewpoint of Chalmers (1996) in the sense that
we are not saying that N(C) causes or generates some prime event in some space of fundamental, non-physical subjective C level. We accept these subjective
events but do not accept that they require any science which lies over and above
that of the functional level. We therefore accept the special nature of the first per-
son sensation, but argue not that it requires an as yet undiscovered science or link
to the functional, but that it is merely a reflection of the functional. In this we
present ideas that are similar to those of Velmans’ (2000) ‘reflexivity’ of con-
sciousness. To provide an analogy that plays on the ‘reflection’ comment, con-
sider the virtual world of reflections in mirrors which only requires the physics of
the tangible physical world to explain any event that may be observed in the
reflected virtual world.
With respect to physiological theories, we stand far aside from the sequence of Penrose’s (1994) and Hameroff’s (1994) attacks on computational theories, which require a quantum source to explain the coexistence of mental states, as when contemplating a problem and visualising its solution simultaneously. We have
given a counterexample of this in Aleksander and Dunmall (2000), where an
imagined state can coexist with a perceptual state through the concept of the
j-referent of corollary 2.1. We would argue that the axiomatic approach in this
paper is thoroughly computational.
We do, however, side with the physiological theories of Crick and Koch
(1995) in the sense that all sensation has a neural correlate. We differ slightly
only in not seeing binding between neural areas as being crucial for superimposi-
tions of neural activities in different brain areas to create a coherent sensation as
indicated in Hypothesis 1, which, also, is based on the concept of the j-referent of corollary 1.2.
XII: Conclusion: Consciousness in Machines
In the implication that follows Hypothesis 3, it is necessary to ask what is meant by ‘a complete theory of N(C)’? The answer relates back to the five axioms
which suggest that it is necessary for certain mechanisms to be present in an
agent to create a form (albeit minimal) of consciousness. These mechanisms are
recognisable through an analysis of the structural (anatomical in living organ-
isms) and functional (neurological in living organisms) characteristics of a given
agent. Such analysis is couched in terms that are computationally realisable. This
implies that it is possible to transfer the understood mechanisms into computa-
tional agents and thus create systems with a ‘machine’ consciousness.
Of course, any theories of N(C) would not be complete at any point in time.
Indeed ‘completeness’ may be something that can only be approached asymptot-
ically. However, whatever the state of understanding of N(C) is at any point in
time, the contention of this paper is that it is machine testable by methods such as those suggested by the axioms above.
The primary understanding of N(C) is likely to come from neurology. An advantage of transferring this to machines is that hypotheses in neurology can be
tested by computation. This was the procedure used in our work on the mecha-
nisms of visual awareness (Aleksander and Dunmall, 2000) and is currently
being applied to the discovery of the cause of visual memory deficits among sufferers of Parkinson’s disease. Also, elements of N(C) may be directly applied to machinery, as is currently being done to enable a mobile robot with vision to develop a sense of visual awareness.
But in addition to what is known in neurology, the transfer of mechanisms into
computation allows knowledge to be added to theories of N(C). A most impor-
tant feature of a neuromodel is that the content of the conscious experience can
be decoded and displayed on the screen (due to Axiom 1 — Depiction). This is a
function of the structural knowledge that one has of the connections and coding
in the machine. It is unlikely that, with the current state of brain-scanning systems, such a feat could be done for living organisms. It may, however, become possible in the future, at which point the idea that F is more than a 1:1 relationship could be put to the test.
In the preamble to the paper we drew attention to the fact that some commenta-
tors on consciousness will see the C, which is at the focus of the axioms, as being
something different from their concept of consciousness. They argue that this
therefore begs the question of whether the mechanisms in this paper are relevant
to whatever it is that they call consciousness. However, they would go on to agree
that a rock or a computer is not conscious while a human being is. A rock or an
unprogrammed computer has none of the axiomatic mechanisms while a bat or a
human being might have. We argue therefore that some correlation exists
between the presence of consciousness in objects that are reasonably well agreed
to be conscious and the existence of the axiomatic mechanisms. However, as
mentioned in our preamble, were there to exist a virtual machine in the host com-
puter, and if the presence of the five axiomatic mechanisms could be demon-
strated, in what sense could the virtual machine be said to be conscious? The
axioms would lead us to believe that the computer could either be conscious of a virtual world resident in the computer or of a real world, were the computer to be the ‘brain’ of a robot. An example of a non-conscious object may be found in some recently built robots in our laboratory (which may sound like no great achievement). At best one can identify in them the mechanisms that accord with just the first three axioms. Are they conscious? Well, our approach would say
that they are not, but given a development or evolution of the remaining two axi-
omatic mechanisms, what arguments could be used to deny them consciousness?
So, here too, we argue that the axiomatic approach issues a challenge to its
detractors to provide at least a logical if not axiomatic argument as to why con-
sciousness is something unrelated to the axioms in this paper.
References
Aleksander, Igor (1996), Impossible Minds: My Neurons, My Consciousness (London: Imperial College Press).
Aleksander, Igor and Dunmall, Barry (2000), ‘An extension to the hypothesis of the asynchrony of visual consciousness’, Proceedings of the Royal Society of London B, 267, pp. 197–200.
Baars, Bernard (2001), ‘Surrounded by consciousness’, paper presented at the conference Towards a Science of Consciousness, Skövde, Sweden.
Chalmers, David J. (1996), The Conscious Mind: In Search of a Fundamental Theory (Oxford: Oxford University Press).
Crick, Francis and Koch, Christof (1995), ‘Are we aware of neural activity in primary visual cortex?’, Nature, 375, pp. 121–3.
Dennett, Daniel (1993), Consciousness Explained (London: Penguin).
Freeman, Walter J. (1999), How Brains Make Up Their Minds (London: Weidenfeld and Nicolson).
Galletti, Claudio and Battaglini, Pier Paolo (1989), ‘Gaze-dependent visual neurons in area V3A of monkey prestriate cortex’, Journal of Neuroscience, 9, pp. 1112–25.
Hameroff, Stuart (1994), ‘Quantum coherence in microtubules: A neural basis for emergent consciousness?’, Journal of Consciousness Studies, 1 (1), pp. 91–118.
Nagel, Thomas (1974), ‘What is it like to be a bat?’, Philosophical Review, 83, pp. 435–50.
Penrose, Roger (1994), Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford: Oxford University Press).
Velmans, Max (2000), Understanding Consciousness (London: Routledge).