Integrative Approaches to Machine Consciousness
5th-6th April 2006
Organisers
Rob Clowes, University of Sussex
Ron Chrisley, University of Sussex
Steve Torrance, Middlesex University
Programme Committee
Igor Aleksander, Imperial College London
Giovanna Colombetti, York University
Rodney Cotterill, Technical University of
Denmark
Frédéric Kaplan, Sony Computer Science
Laboratory
Pentti Haikonen, Nokia Research Center
Germund Hesslow, Lund University
Owen Holland, University of Essex
Takashi Ikegami, University of Tokyo
Miguel Salichs, University Carlos III
Ricardo Sanz, Polytechnic University of
Madrid
Murray Shanahan, Imperial College London
Jun Tani, Brain Science Institute
Steve Torrance, Middlesex University
Tom Ziemke, University of Skövde
Contents
On Architectures for Synthetic Phenomenology (Igor Aleksander, Helen Morton) 108
Correlation, Explanation and Consciousness (Margaret Boden) 116
The Problem of Inner Speech and its relation to the Organization of Conscious Experience: a Self-Regulation Model (Robert Clowes) 117
Playing to be Mindful (Remedies for Chronic Boxology) (Ezequiel Di Paolo) 127
The XML Approach to Synthetic Phenomenology (David Gamez) 128
The Embodied Machine: Autonomy, Imagination and Artificial Agents (Nivedita Gangopadhyay) 136
Towards Streams of Consciousness; Implementing Inner Speech (Pentti O A Haikonen) 144
Could a Robot have a Subjective Point of View? (Julian Kiverstein) 150
Acting and Being Aware (Jacques Penders) 152
Using Emotions on Autonomous Agents. The Role of Happiness, Sadness and Fear (Miguel Angel Salichs, Maria Malfaz) 157
Towards a Computational Account of Reflexive Consciousness (Murray Shanahan) 165
How to experience the world: some not so simple ways (Aaron Sloman) 171
Machine Consciousness and Machine Ethics (Steve Torrance) 173
On Architectures for
Synthetic Phenomenology
Abstract
Is synthetic phenomenology a valid concept? In approaching consciousness from a computational point of view, the question of phenomenology is not often explicitly addressed. In this paper we review the use of phenomenology as a philosophical and a cognitive construct in order to have a meaningful transfer of the concept into the computational domain. Two architectures are discussed with respect to these definitions: our 'kernel, axiomatic' structure and the widely quoted 'Global Workspace' scheme. The conclusion suggests that architectures with phenomenal properties genuinely address the issue of modelling consciousness, and indicates the way that a machine with synthetic phenomenology may benefit from the property.
1 Introduction
In searching for computational models of being
conscious, the detailed nature of internal represen-
tation is an important facet of the way that model-
ling is to be approached. Synthetic phenomenol-
ogy is involved when two conditions are fulfilled:
first there is a meaningful sense in which a first
person may be ascribed to the model and second,
when the architecture caters for an explicitable and
action-usable representation of “the way things
seem” within the machine. We take the view that
rather than this being an idealist stance, it repre-
sents as close an approximation to “the way things
are” as is permitted by the sensory apparatus of
that organism. This is assumed to be sufficiently
close to reality to enable the organism to take ap-
propriate action in its world. So one expects to
find accurate phenomenological representation in
successfully evolved organisms, as a major dis-
tance between the representation and reality does
not augur well for successful evolution.
The paper first reviews the reasons why, in philosophy, phenomenology had a firm foothold despite the fact that the appellative became used in a variety of ways. A brief discussion is included on Block's use of the word in the notion of 'Phenomenal consciousness' as being distinct from 'Access consciousness' and, particularly, on the way that such concepts could feature in computational systems.
The concept of a ‘depictive’ representa-
tion is developed in this paper beyond that which
has been discussed to date (Aleksander, 2005) to
show that this is a central requirement for an archi-
tecture that could be said to be synthetically phe-
nomenological. A set of architectural definitions
is then developed that determines whether an ar-
chitecture could be said to be phenomenological or
not. Two known architectures are scrutinised from
the point of view of these definitions: our own kernel architecture (Aleksander, 2005) and
Shanahan’s embodied version of Baars’ Global
Workspace architecture (Shanahan, 2005). This
reveals that the issue of phenomenology can be
considered for differing mechanistic descriptions,
of which the two architectures are distinct exam-
ples. In the conclusion we argue that the material
in the paper indicates that architectures that are
phenomenological have characteristics of being
conscious that enhance their use both as explana-
tory tools and, possibly, functional artefacts. We
shall first review issues that go under the heading of Phenomenology and italicise strands that are taken up in discussing the implications for synthetic systems and their architectures discussed later in the paper.

Igor Aleksander
Dept. of Electrical and Electronic Engineering,
Imperial College, London SW7 2BT
i.aleksander@imperial.ac.uk

Helen Morton
School of Social Sciences and Law
Brunel University, Uxbridge UB8 3PH
Also, Imperial College, London SW7 2BT
helen.morton@brunel.ac.uk
2 Phenomenology
2.1 Definition
In the broadest terms, phenomenology is the word
given to studies of consciousness which specifi-
cally start with the first person. In other words,
introspection is an important facet of the discus-
sion. This distinguishes phenomenology from
other forms of philosophy, say, ontology, which
asks what it is for an object to be conscious. One
should also distinguish ‘a phenomenon’ from other
philosophical constructs such as ‘qualia’ which
relate to sensational primitives such as ‘redness’ or
‘the sweet smell of a rose’. In general, phenome-
nologists like to extend the definition beyond the
immediate sensation to more compositional struc-
tures of experience such as enjoying a game of
tennis or the experience of having tried a new res-
taurant. This also aids action in the world and the
generation of descriptive language in the case of
humans or human-like machines.
Conforming with the above definition, the
‘kernel’ architecture we shall discuss in this paper
was synthesised through a process of using intro-
spection to discover design principles. This led to
a consideration of ways that this work contributes
to the formation of a synthetic phenomenology
paradigm.
2.2 Past Usage
It is noted that in the history of philosophy, phe-
nomenology is sometimes treated as the study of
consciousness itself. For Franz Brentano (1874
trans. 1995) phenomena are acts of consciousness,
they are the contents of mind. They stand in rela-
tion to physical phenomena that are perceived in
the world by intentionally creating meaning of
physical elements of the world in the mind. This
first-person, descriptive character of a phenome-
non has remained the hallmark of the work of later
phenomenologists. Of these, Edmund Husserl
(1913 trans. 1989), also focuses on the meanings
the mind creates when contemplating the real
world. This position addresses the mental object
beyond just its real-world shape. So a stick may
have the ability to dislodge a banana off the branch
of a tree, enhancing the phenomenology of the
stick by a mental vignette of the action of dislodg-
ing the banana.
Martin Heidegger (1975, trans. 1982) main-
tained that setting ontology (what it is to be con-
scious) apart from phenomenology could be an
error. He suggests that it is actually linked to the
phenomenology of the first person sensation of
being a self in an external world. See the influence
of this in what we shall call ‘axiom 1’. Given Sar-
tre’s socio-philosophical observations on phe-
nomenology as a literary examination of one’s
own experience and Maurice Merlau-Ponty’s link-
ing of phenomenology to personal experiences of
one’s own body (1945, trans. 1996) this becomes
important particularly for those who discuss con-
sciousness in the context of embodied robots.
The body’s muscular activity is a key element
in the ‘kernel’ architecture to create ‘depictions’,
that is sensations of being an entity in an out-there
world. As will be seen, Shanahan argues that em-
bodiment is essential to have an experiencer.
2.3 Materialist Concerns
Gilbert Ryle in Concept of Mind (1949) argued
that linguistic descriptions of mental states are a
direct way of expressing phenomenology. This
was possibly erroneously discredited by many
materialists who identified the mental state with
the neural state. Clearly only some neural states
support phenomenology as identified by Crick and
Koch (2003). Only some parts of the entire neural
state are responsible for personal sensation, the
parts that are not have been called by the authors
the ‘Zombie’ regions of the brain. This appears to
beg the question of how one distinguishes a neu-
ron that contributes to conscious sensation from
one that does not. A possible answer was devel-
oped by Aleksander and Dunmall (2003) and Alek-
sander (2005). This draws attention to the fact
that in the visual system only some neurons, those
indexed by the motor areas of the brain, can fire in
a way that correlates with elements of the visual
sensation of being an entity in an ‘out-there’
world. This is summarised later in this paper.
2.4 Access and Phenomenal Aspects
Ned Block (1995) has identified at least two sali-
ent functions of consciousness. The first he calls
‘phenomenal’ or P-consciousness to indicate the
personal function of experiencing a mental state.
He contrasts this with ‘Access’ or A-
consciousness which is that function of conscious-
ness which is available for use in reasoning, being
‘poised’ for action and the generation of language.
Although he argues that both are present most of
the time, conflating the two when studying con-
sciousness is a severe error. Some evidence for Block's distinct forms of consciousness is drawn from the phenomenon of 'blindsight', where individuals with a
damaged primary visual cortex can respond to
input without reporting an experience of the input.
This is A without P. P without A is the case in which some unattended experience had happened previously (e.g. a clock striking) but the individual only realised this later. That is, P without A covers
the case that unattended input can be retrieved.
This creates a mechanistic difficulty for the defini-
tion of phenomenal consciousness as, were it never
to be brought into access format, it could not in
any way be described as ‘the way things seem’. In
hard-nosed synthetic phenomenology it might be
politic to concentrate only on things that have
seemed or seem to be like something.
This implies that in architectures it is impor-
tant to be clear about the way in which immediate
perceptual consciousness interacts with awareness
of past experience, which bears on the A/P dis-
cussion.
Blindsight has also entered the theories of ‘en-
acted’ vision proposed by Kevin O’Regan and
Alva Noë (2001) who have broadly argued that
‘representing’ the visual world in any architecture,
living or synthetic, is an error, as the world itself is
representation enough for the system to act on in a
physical way. Consciousness is then a ‘breaking
into’ this somewhat reactive, autonomic process
through mechanisms of attention.
It is known that in the brain there are uncon-
scious sensorimotor processes of the O’Regan and
Noë description that work in conjunction with con-
scious phenomenal processes. For example the
oculo-motor loop that involves the superior col-
liculus is such a mechanism. We are not conscious
of the retinal maps that are projected onto the su-
perior colliculus. They lead, also unconsciously, to
the saliency maps that partly determine eye move-
ment which eventually leads to reconstructions of
world-fixed representations much deeper in the
visual cortex (the extrastriate regions according to
Crick and Koch, 2003). The enacted-unconscious/depicted-conscious interaction is a
useful concept that may be used in synthetic sys-
tems. We find it difficult to accept the ‘hard’ sen-
sorimotor view that complete access to a visual
world can be achieved without any phenomenal
representation at all.
3. Phenomenology in Computational Models
There are two important computational issues we
wish to stress here. The first is the nature of a
third-person design of an object that is capable of
first-person representation, and the second is the
relationship of depiction to synthetic phenomenology.
3.1 The Third Person Design with First
Person Within It.
Where, in philosophy, phenomenology starts with
the first person sensation, we suggest that in com-
putational modelling, a phenomenological model
must, in the broadest terms, sustain representa-
tions that have first person properties for the model
itself. There is no dualist sleight of hand here, as the
designer of the system can happily retain a third-
person view of what is being designed, given a
theory of what in the design is necessary to
achieve a first person for the mechanism. That is,
despite starting with our own first-person sense,
we can speak of the first person of others. Simi-
larly, we can speak of the first person of a machine
and, indeed, set out to search for mechanisms of
such. This implies that, in vision, for example,
there is a need to differentiate mechanisms that
mediate the sense of presence of the organism in
the world from those that are due to previous ex-
perience: memory of various kinds and
imagination (for example, states induced by
literature). That is, there needs to be
computational clarity about how a first-person
phenomenal state relates to the current world
event, how meaning is assigned to this, how
meaningful states arise even in the absence of
meaningful sensory input and how a personal sen-
sation of decisions about ‘what to do next’ can
arise. In Aleksander and Dunmall (2003) and Aleksander (2005) we have referred to a necessary
property for the machine having a first person at
all as being a ‘depiction’. Here we set out this
concept as a logical sequence.
3.2 Depiction and Phenomenology.
It is useful to define what we mean by a syntheti-
cally phenomenological system.
Def 1: To be synthetically phenomenol-
ogical, a system S must contain machinery
that represents what the world and the sys-
tem S within it seem like, from the point of
view of S.
The word seem has been transferred from the phra-
seology of the earlier parts of this paper to stress
that perfect knowledge of the world cannot be
achieved if only because of the weaknesses of
sensory transducers. But, it is stressed that living
creatures, if we believe that they have phenome-
nological representations, will come to our notice
only through successful evolution. Again we
stress that this is due to some sufficiency in the
similarity between what things seem like and how,
in a sense important to the organism, they not only
seem like but, as far as the organism is concerned,
they are. To achieve this it is necessary that such a
representation should fully compensate for transducer and body mobility. In earlier work we have
called this a ‘depiction’ rather than a representa-
tion. To advance this prior work we develop a se-
ries of definitions and assertions about depictions
that positions this work within the framework of
phenomenology addressed earlier.
Def 2: A depiction is a state in system S
that represents, as accurately as required by
the purposes of S, the world from a virtual
point of view within S.
Assertion 1: A depiction of Def. 2 defines the
mechanism that is necessary to satisfy that a sys-
tem be synthetically phenomenological according
to Def. 1.
Assertion 2: If S is mobile and has mobile
sensors, a depiction of Def. 2 can only be achieved
if the mobile nature of S is combined with the in-
formation carried by the sensors. That is the
‘where’ of the elements of the world needs to be
predicated on the ‘body’ parameters of S. (In vi-
sion, eye-movement clearly needs to be compen-
sated to achieve a depiction).
Assertion 3: 'As accurately as required ...' in Def. 2 indicates that, given effectors with which
to act on the world, the depiction should carry all
the information needed for such effectors to be
successfully deployed on the attended and desired
elements of the world.
Assertion 4: 'As accurately as required ...' also determines the granularity with which the depiction may be achieved.
Assertion 5: While Def. 2 makes no call on a
topological representation, it does require that dif-
ferently positioned elements within the representa-
tion be indexed by the predicates introduced in
assertion 2. In animal vision it is known that dif-
ferent attributes of a visual element (e.g. the colour
and motion of a dot) are represented in different
parts of the brain. What ‘binds’ them in our analy-
sis is the indexing as clarified in the example be-
low (see Aleksander and Dunmall, 2000).
Example of indexing: Participant X is fixating a cross in the centre of a screen. She is asked to identify the shape s and colour c of an object that will appear briefly on some other part of the monitor screen. Shape is represented in area P of her brain and colour in area Q. The eye, driven by the superior colliculus, will saccade to the position of the object. The signal issued by the eye movement is, say, a 2-dimensional vector v. Then the depiction in P will be s, indexed by v, say s_v. Similarly, in Q we have c_v. Assertion 5 states that the binding of s and c is due to the common indexing by v: that is, (s,c)_v.
It is the deeper contention of the depictive approach that (s,c)_v uniquely encodes X's phenomenal experience of the appeared object. Of course, away from this experimental example, the indexing, as indicated by a great deal of physiological evidence (e.g. Galletti & Battaglini, 1989), occurs over many areas of the cortex, giving the phenomenal experience of one sensory modality several dimensions, possibly bound across modality boundaries. Touch together with vision is a commonly bound experience.
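The common-index binding of the example above can be sketched in code. This is a minimal illustration with invented names (bind_by_index, area_P, area_Q are ours, not the authors'): attributes stored in separate areas are bound solely by carrying the same eye-movement index v.

```python
# Sketch of Assertion 5's indexing scheme (illustrative names only):
# attributes held in separate "areas" are bound by sharing the same
# gaze-signal index v, not by topological co-location.

def bind_by_index(area_P, area_Q):
    """Pair up attributes that carry the same index v."""
    return {v: (area_P[v], area_Q[v]) for v in area_P if v in area_Q}

# v is the eye-movement vector issued when the object appears.
v = (12, -5)                      # 2-dimensional saccade signal
area_P = {v: "square"}            # shape s, indexed by v  -> s_v
area_Q = {v: "red"}               # colour c, indexed by v -> c_v

bound = bind_by_index(area_P, area_Q)
print(bound[v])                   # ('square', 'red')  i.e. (s, c)_v
```

An attribute whose index differs (an unattended object) would simply fail to bind, which is consistent with the attentional reading of the example.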
4. Architectures
By ‘architecture’ we refer to a structure that first,
is made of several internal parts each of which
performs a specified distinct function, and second,
includes a full specification of the interconnections
among these parts, the inputs and a variety of outputs (e.g. language generators, physical actuators, etc.). It is the contention of this paper that there
exists a set of architectures that can support phe-
nomenology for the organism that embodies the
architecture. We shall first look at two specific
architectures to assess some of the definitional
material presented in section 3.
4.1 The ‘Kernel’ Architecture
It is hardly a coincidence that a prototypical archi-
tecture we have recently suggested (Aleksander,
2005) should be based on the notion of a depiction
and can, therefore, be said to have phenomenal
consciousness according to our criteria. We take a
closer look at this scheme, which is shown in Fig. 1.
Figure 1. The 'kernel' architecture.
This architecture is based on the axioms of con-
sciousness published in Aleksander and Dunmall
(2003). For completeness, they are briefly listed in
the Appendix of this paper.
These axioms start from a phenomenological
standpoint as they are derived through an introspective decomposition of the most significantly
felt aspects of being conscious. Then it has been
argued that the decomposition eases the transfer of
these features into the synthetic domain.
Fig.1 is the result of this process. It consists of
five modules each of which is considered to be a
neural state machine (NSM) that operates in binary
mode. That is, each connection carries a binary
signal. We have often argued that any loss of generality due to the binary synthesis will be minor
with respect to the behaviours that are being re-
searched.
The binary NSM is specified as the eight-tuple
<Ci, Co, Cf, Ct, I, O, F, T>_n
where:
n is the module index;
Ci is a connection pattern of inputs (which may come from other modules or sensory inputs);
Co is a connection pattern of outputs (to other modules or system outputs);
Cf is the pattern of internal feedback connections;
Ct is the set of 'teaching connections' that determine the state of Co and Cf that becomes associated with Ci;
I, O, F and T are the state sets of Ci, Co, Cf and Ct respectively.
Then, in the usual way with neural state ma-
chines, the states of F(t) and O(t) become func-
tions of F (t-1) and I(t). These functions are de-
termined by a training strategy which is expressed
through T during a ‘training phase’.
For example, an ‘Iconic’ mode of training is
conventional with neural state machines of this
kind (Aleksander and Morton, 1995). This ensures
that, given that Ct and Cf have the same dimen-
sions and Co=Cf, the network learns F(t)=T(t) as a
function of I(t) and F(t-1).
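The iconic training rule can be sketched as a lookup-table state machine. This is a minimal illustration under the binary-NSM assumptions above; the class name IconicNSM and its representation of states as tuples are our own, not the authors' implementation.

```python
# Minimal sketch of a binary neural state machine (NSM) with 'iconic'
# training: the teaching pattern T(t) becomes the next state F(t)
# associated with the pair (I(t), F(t-1)). Names are illustrative.

class IconicNSM:
    def __init__(self, initial_state):
        self.table = {}          # (input, prev_state) -> next state
        self.state = initial_state

    def train(self, i, t):
        # During training, the next state is forced to T(t) and the
        # association with (I(t), F(t-1)) is stored.
        self.table[(i, self.state)] = t
        self.state = t

    def step(self, i):
        # Free-running: F(t) is a function of I(t) and F(t-1);
        # untrained pairs leave the state unchanged in this sketch.
        self.state = self.table.get((i, self.state), self.state)
        return self.state

nsm = IconicNSM(initial_state=(0, 0))
nsm.train(i=(1, 0), t=(1, 1))    # teach: input (1,0) recreates (1,1)
nsm.state = (0, 0)               # reset, then run freely
print(nsm.step((1, 0)))          # (1, 1) -- the trained state recurs
```

A real NSM would generalise over similar binary patterns rather than use an exact-match table; the table merely makes the F(t) = f(I(t), F(t-1)) dependency explicit.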
Returning to Fig.1, the four axioms are imple-
mented as follows. P is a ‘Perceptual’ NSM
which is made to be phenomenological in the
sense of the earlier definitions of this paper
through the following design. The state F(t) is a
reconstruction of the sequences of attended world
inputs from sensory transducers over defined time
windows (sometimes sliding time windows). The
muscular effort required to attend to the elements
of the world is shown as the link from the action
NSM, A. In the animal visual system it is sur-
mised that attentional shifts are driven by saliency
maps in the superior colliculus. In specific studies
of the visual system, this has been modelled as an
additional part of the kernel architecture (See Igor
Aleksander et al. 2001
)
M is the memory and ‘imagination’ module. It
is connected to P in such a way that for every re-
construction in P, a state in M is created. Se-
quences of reconstructed states in P can therefore
be stored as state trajectories in M – they will have
inherited the depictive, hence phenomenal proper-
ties of P.
P and M together form what we have dubbed
‘the awareness areas’ of the architecture. In the
sense that one can perceive and recall at the same
time, the two areas both contribute to the same
phenomenal state. The remaining modules of the
kernel architecture are not depictive, hence not
phenomenal, but add to the phenomenal existence
of the system in the following way. As mentioned,
A is the action area in which links between the
state trajectories of the phenomenal areas are
translated into action. But this is not automatic; it
is surmised that volition and emotion as imple-
mented in module E mediate this link. This was
the subject of the contribution by Aleksander, La-
hnstein and Lee in the AISB 2005 symposium on
machine consciousness.
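The module interactions just described might be caricatured as follows. The behaviours here are purely illustrative placeholders of our own devising (the actual modules are neural state machines, not these functions): P depicts attended input, M stores the resulting state trajectory, and E gates whether A turns a state into action.

```python
# Toy schematic of the kernel architecture's P, M, E, A interplay
# (illustrative only; not the authors' implementation).

def perceive(sensor_input, attention_shift):
    # P: a depiction pairs attended world input with the muscular
    # (attention) signal fed back from the action module A.
    return (sensor_input, attention_shift)

def run_kernel(inputs, emotion_ok):
    memory_trajectory = []       # M: inherits P's depictive states
    actions = []                 # A: non-depictive action area
    shift = 0
    for x in inputs:
        p_state = perceive(x, shift)
        memory_trajectory.append(p_state)
        if emotion_ok(p_state):  # E: volition/emotion mediates P,M -> A
            actions.append(("act_on", x))
        shift += 1               # next attentional shift fed back to P
    return memory_trajectory, actions

traj, acts = run_kernel(["apple", "snake"], lambda s: s[0] != "snake")
print(acts)                      # [('act_on', 'apple')]
```

The point of the sketch is structural: every P state leaves a trace in M, while the P/M-to-A link passes through an evaluative gate rather than firing automatically.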
In summary, the kernel architecture is based
ab initio on the intention of synthesising an archi-
tecture with phenomenological properties. This
has also been guided by those who, like Crick and Koch (2003), have been researching the neural correlates of consciousness in living organisms. We
now consider a model that is more closely related
to computational approaches of the functional
kind.
4.2 Embodied Global Workspace
Bernard Baars’ (1988, 1997) Global Workspace
models have held sway in computational model-
ling of consciousness for some years. Baars con-
sidered how a large number of unconscious proc-
esses might collaborate to produce a continuum of
conscious experience. In very broad terms, he an-
swers the question through the architecture of Fig.
2.
Figure 2. A sketch of Baars' Global Workspace architecture.
The separate processes, P1 to Pk, said to be uncon-
scious, compete to enter ‘The Global Workspace’.
Such processes are often thought of as memory
activities, say, episodic memory, working memory
and so on. The competition is won by the process
that has the greatest saliency at a given moment.
This saliency is predicated by world input which
sets the context for the competition. Of course,
world input is also assumed to have direct influ-
ence on the unconscious processes P1 - Pk. Having
entered the global workspace, the winner of the
competition becomes the conscious state of the
system. This is continuously 'broadcast' back to
the originating processes that change their state
according to the conscious state. This results in a
new conscious state and so on, linking sensory
input to memory and the conscious state. It is both
general and useful for these separate processes to
be modelled as NSMs, as was done for the kernel architecture.
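One competition-and-broadcast cycle of this scheme can be sketched as follows. This is an illustrative toy with a made-up saliency rule; the Process class, its bias parameter, and the string-concatenation "state change" are our assumptions, not Baars' or Shanahan's models.

```python
# Toy Global Workspace cycle: unconscious processes compete on
# saliency set by world input; the winner's content becomes the
# conscious state and is broadcast back to every process.

class Process:
    def __init__(self, name, bias):
        self.name, self.bias, self.content = name, bias, name

    def saliency(self, world_input):
        # Context set by world input plus the process's own bias.
        return self.bias + (1.0 if world_input == self.name else 0.0)

    def receive(self, broadcast):
        # State change driven by the conscious broadcast.
        self.content = f"{self.name}|{broadcast}"

def gw_cycle(processes, world_input):
    winner = max(processes, key=lambda p: p.saliency(world_input))
    conscious = winner.content          # winner enters the workspace
    for p in processes:                 # broadcast back to all
        p.receive(conscious)
    return conscious

procs = [Process("episodic", 0.2), Process("working", 0.5)]
print(gw_cycle(procs, world_input="episodic"))   # 'episodic' wins
```

Iterating gw_cycle yields the succession of conscious states the text describes, each conditioning the next round of competition.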
Murray Shanahan (2005) points out that
modelling of a conscious organism cannot proceed
without that organism being embodied in some
palpable world. Using the above Global Work-
space model he argues that there can be no ‘ex-
periencer’ in GW unless the model takes account
of the “spatial unity of the body”. It is this
localisation in space that for Shanahan gives the
model its “viewpoint on the world” which
according to def. 1 makes it a candidate for
phenomenal consciousness. Shanahan argues that
denying this possibility, as is done by Block
(1995), revives the dualist stance, putting
phenomenal consciousness in the Chalmers-like
‘hard problem’ class, that is, a problem that cannot
be reduced to physical structure and hence cannot
be synthesized. And yet the claimed 'point of view' of the embodied organism is undoubtedly a claim that accords with Definition 1 above of a phenomenal system. In terms of Block's division
into access and phenomenal consciousness,
Shanahan implies that the embodied GW model
addresses access consciousness, treating the
phenomenal element as being an unnecessary
appeal to a dualistic concept.
4.3 GW and Synthetic Phenomenology
While it seems entirely correct that without em-
bodiment, GW does not include an experiencer,
the question remains of how the experience stream
in GW relates to the real world. We recall that in
section 3.2 we have argued that a synthetic phe-
nomenological system is achieved through a com-
positional representation of the world that is suffi-
ciently accurate for the system to be able to use its
embodiment to control its world as accurately as
possible. That is, it is the contention of this paper that depiction is the missing ingredient in making GW phenomenal. Phenomenal consciousness can occur in functional, physical systems, and
the implication for the embodied GW system is
that all the P1-Pk states need to be depictive for
the GW state to be truly a model of a conscious
state. Were this not the case, some translation into
depiction would have to go along with the winning
of the competition. Otherwise the spectre of
purely arbitrary representations in GW remains.
Shanahan is aware of this, requiring that the conscious broadcast back to the competing processes be in some way intelligible to these processes. But this still makes it hard to see how the
states of the processes remain non-depictive when
the state of GW might be depictive.
5. Discussion
In this paper we have explored the concept of syn-
thetic phenomenology mainly by attempting to
define the necessary features of an architecture
that supports phenomenal consciousness within the
broadest definition of the term. We brought the
definitions to ground by considering two models
that might be candidates for possessing these fea-
tures. To conclude we raise and, using the mate-
rial of this paper, attempt to answer five general
questions that may be central to the existence of a
synthetic phenomenology. The first of these ad-
dresses the architectures presented in the paper.
Can non-depictive representations be phe-
nomenal?
It is the firm implication of this paper that this
cannot be the case. It is depiction in a functional
area which determines that the area contributes to
the phenomenal sensation of the organism. Were
this not the case, a human description of a state
would require translation into phenomenal terms
as such descriptions are of phenomena and not
encoded states.
What is the difference between ‘depictive ker-
nel’ and GW architectures in terms of synthetic
phenomenology?
Clearly the depictive kernel architecture was de-
signed with the purpose of creating a phenomenal
representation within the system according to the
definitions set out in this paper. This has the computational advantage that the current phenomenal state of the machine can be displayed on a screen, enabling a designer's assessment of the interactions between both postulated conscious and
postulated unconscious mechanisms in the genera-
tion of the phenomenology. The rules used in the
synthesis involve depiction. Originally no phe-
nomenal claims were made for GW, particularly in
its practical form as synthesised by Stan Franklin
(2003). However with the embodied GW work of
Murray Shanahan, the question of the presence of synthetic phenomenal consciousness acquires a new urgency. In this paper we have maintained
that were an architecture based on GW to have a
phenomenal character, there must be a depictive
activity in the processes that compete for entering
the global workspace if the system is to be phe-
nomenological. This creates problems as, in our
scheme of things, depiction in an area of the archi-
tecture implies phenomenal consciousness and
GW sees the competing processes as being non-
conscious. Therefore a phenomenal GW implies
some sort of coming into consciousness in the GW
area for reasons other than depiction. These have
not yet been explained. Of course, the depiction
idea can be rejected, but if not depiction, then
what?
What is the use of synthetic phenomenology?
Given the difficulties mentioned with embodied
GW above, it is proper to ask why bother with
phenomenology and why not settle for just access
consciousness as implied by Shanahan (2005)? In
the arguments of the current paper, phenomenol-
ogy actually includes the purposes that are attrib-
uted to access consciousness. But such purposes
are explicit and searchable through attentional
mechanisms for reasons of accurate interaction
with the environment (see assertions 3 and 4).
This is not a Blockian confusion, but rather a sug-
gestion that there may not be as clear-cut a func-
tional/neurological distinction between access and
phenomenal consciousness as Block seems to sug-
gest. The A without P and P without A cases may
be extreme conditions of a central phenomenon. In
summary we argue that accurate interaction with,
and thought about the real world is the purpose of
phenomenology in a synthetic system.
Is synthetic phenomenology an oxymoron as
it is the non-physical experiential side of con-
sciousness and therefore eschews synthesis?
Everything we have submitted in this article is a
denial of the above proposition. Treating phe-
nomenology as the ‘hard’ part of consciousness
simply kicks it out of touch of science into some
mystical outfield. We maintain that addressing it
as a constructible concept removes the mysticism
with which it might otherwise be associated.
Is synthetic phenomenology an arbitrary de-
sign option for models of consciousness?
This paper regards models of consciousness with-
out synthetic phenomenology as being valid only
in a behavioural sense. That is, it is possible for a
model to be given attributes of being conscious
from its behaviour. Stan Franklin’s Intelligent
Distribution Agent (2003) is a good example of this
class of system. Users think that they are dealing
with an entity conscious of their needs. But if one
were to argue that an architecture throws light on
the mechanisms of consciousness in the brain it
becomes mandatory to include phenomenal, that is
depictive functions.
What research needs to be done in developing
architectures with synthetic phenomenology?
Referring to the kernel architecture, there is much
work to be done on modes of interaction between
the modules. Current work includes a clarification
of the way the emotion module E controls the link
between the phenomenological P and M modules
and the non-phenomenological action module, A.
(fig. 1).
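One way to picture this mode of interaction, offered purely as a hedged sketch rather than the authors' mechanism, is to treat E as a gain on the link from the depictive P and M modules to the non-depictive action module A. The state encodings, the valence scale and the gating rule below are all hypothetical:

```python
# Hypothetical sketch of the kernel architecture's module wiring:
# depictive perceptual (P) and memory/imagination (M) signals drive the
# non-depictive action module (A), with the emotion module (E) acting
# as a gain on that link. None of these encodings are the authors' own.

def emotion_gate(valence):
    """E: map an emotional evaluation in [-1, 1] to a gain in [0, 1]."""
    return max(0.0, min(1.0, 0.5 + 0.5 * valence))

def select_action(p_state, m_state, valence):
    """A: choose the action whose combined perceived (P) and imagined
    (M) support, scaled by E's evaluation, is strongest."""
    gain = emotion_gate(valence)
    drive = [gain * (p + m) / 2.0 for p, m in zip(p_state, m_state)]
    return max(range(len(drive)), key=lambda i: drive[i])

best = select_action([0.2, 0.9], [0.1, 0.8], valence=0.5)
```

On this toy reading, a strongly negative evaluation from E closes the gate and decouples the depictive modules from action, which is one way of reading the claim that E controls the link between P, M and A.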
Illusions, ambiguous and ‘flipping’ figures are
situations where phenomenology and reality part
company. We are pursuing the mechanisms that, in
the kernel architecture, would lead to the kind of
perceptual instabilities associated with perceiving
the Necker cube. This underlines the usefulness of
synthetic phenomenology, as perceptual reversals
may be measured in the depictive machinery and
the conditions for such reversals studied. This is
revealing of the interaction between phenomenal
and non-phenomenal processes in the brain.
In GW architectures, it would be interesting to
clarify the causes of phenomenology in the GW
area which are not present in the supporting com-
petitive processes.
Appendix: Axioms of Being Con-
scious.
This is an introspective partitioning of five
important aspects of being conscious:
1. I feel as if I am at the focus of an out-there world.
2. I can recall and imagine experiences of feeling in an out-there world.
3. My experiences in 2 are dictated by attention, and attention is involved in recall.
4. I can imagine several ways of acting in the future.
5. I can evaluate emotionally ways of acting into the future in order to act in some purposive way.
References
Igor Aleksander, The World In My Mind, My Mind In The World, Exeter: Imprint Academic, 2005.
Igor Aleksander, Mercedes Lahnstein and Rabinder Lee: Will and Emotions: A Machine Model that Shuns Illusions, Proc. AISB 2005 Symposium on New Generation Approaches to Machine Consciousness, 2005.
Igor Aleksander and Barry Dunmall: Axioms and Tests for the Presence of Minimal Consciousness in Agents, Journal of Consciousness Studies, 10, pp 7-18, 2003.
Igor Aleksander, Helen Morton and Barry Dunmall: Seeing is Believing, Proc. IWANN01, Springer, 2001.
Igor Aleksander and Barry Dunmall: An Extension to the Hypothesis of the Asynchrony of Visual Consciousness, Proceedings of the Royal Society of London B, 267, pp 197-200, 2000.
Igor Aleksander and Helen Morton, Introduction to Neural Computing (2nd Edition), London: Thomson Computer Press, 1995.
Bernard Baars, In the Theater of Consciousness: The Workspace of the Mind, New York: Oxford University Press, 1997.
Bernard Baars, A Cognitive Theory of Consciousness, Cambridge: Cambridge University Press, 1988.
Ned Block, On a Confusion about a Function of Consciousness, Behavioral and Brain Sciences, 18, pp 227-287, 1995.
Franz Brentano, Psychology from an Empirical Standpoint, Trans. Rancurello et al., Routledge, 1995. Orig. in German, 1874.
Francis Crick and Christof Koch, A Framework for Consciousness, Nature Neuroscience, 6, pp 119-126, 2003.
Stan Franklin, IDA: A Conscious Artifact?, Journal of Consciousness Studies, 10(4-5), pp 47-66, 2003.
Claudio Galletti and Paolo Battaglini: Gaze-Dependent Visual Neurons in Area V3A of Monkey Prestriate Cortex, Journal of Neuroscience, 6, pp 1112-1125, 1989.
Martin Heidegger, The Basic Problems of Phenomenology, Trans. Hofstadter, Indiana University Press. Orig. in German, 1975.
Edmund Husserl, Ideas: A General Introduction to Pure Phenomenology, Trans. Boyce Gibson, Collier, 1963. Orig. in German, 1913.
Maurice Merleau-Ponty, Phenomenology of Perception, Trans. Smith, Routledge, 1996. Orig. in French, 1945.
Kevin O'Regan and Alva Noë, A Sensorimotor Account of Vision and Visual Consciousness, Behavioral and Brain Sciences, 24(5), 2001.
Gilbert Ryle, The Concept of Mind, London: Hutchinson, 1949.
Murray Shanahan, Global Access, Embodiment and the Conscious Subject, Journal of Consciousness Studies, 12(12), 2005 (in press).
Correlation, Explanation and Consciousness
Margaret Boden
Centre for Research in Cognitive Science
University of Sussex,
Falmer, Brighton, Sussex BN1 9QH, UK
maggieb@sussex.ac.uk
Abstract
There’s a lot of excitement about brain-scanning evidence for brain/consciousness
correlations. Although the evidence is new, the idea isn't: Descartes formulated it nearly
400 years ago. However, he didn't regard mind-brain correlations as explanations – and
neither should we.
Mere correlation between events in two domains is not enough for the one to be used as
an explanation of the other. In addition, we need systematicity, isomorphism, and plausible
(ideally, predictive) counterfactual conditionals.
There are a few (very few) examples where we already have those features, in respect of
correlations between brain events and consciousness. In general, however, they can't be
expected.
Even where we do have them, they leave the most difficult problem about conscious
experience untouched.
The Problem of Inner Speech and its relation to the Organiza-
tion of Conscious Experience: a Self-Regulation Model.
Robert Clowes
Centre for Research in Cognitive Science
Department of Informatics
Sussex University
Brighton BN1 9QH
East Sussex
UK
robertc@sussex.ac.uk
Abstract
This paper argues for the importance of inner speech in a proper understanding of the structure of
human conscious experience. It reviews one recent attempt to build a model of inner speech based
on grammaticisation (Steels, 2003). The Steels model is compared with the self-regulation model
proposed here, which is located within the broader literature on consciousness. I argue that the role
of language in consciousness is not limited to checking the grammatical correctness of prospective
utterances before they are spoken. Rather, it more broadly structures, regulates and shapes the
ongoing course of human activity in the world. Through linking inner speech
to the control of attention, I argue that the study of the functional role of inner speech should be a central
area of analysis in our attempt to understand the development and qualitative character of human
consciousness.
1 Introduction
To introspection, for many of us, our men-
tal life seems to have a constant accompa-
niment of inner speech. This speech is
known in the literature under a number of
names, such as the inner voice or the internal
monologue, and is sometimes subsumed
into the more general stream of con-
sciousness (James, 1890). It may also be
linked to the generally pejorative notion
of 'voices in the head'. Under-
standing the nature of this phenomenon and
its functional underpinnings, although of
occasional interest in the history of psy-
chology, has in the last few years drawn
the attention of many researchers into
mind. There is, however, much controversy
about the precise nature of inner speech, its
epistemic status and possible functional
role.
Among psychologists, one means of ac-
counting for inner speech is Baddeley’s ar-
ticulatory loop (Baddeley & Hitch, 1974),
later rechristened the phonological loop [1]
(Baddeley, 1997). This is considered to be
a speech related working memory system.
Among philosophers, the notion of inner
speech suggests privileged access to mental
states, and this, at least in the 20th century,
has invited great scepticism. The high-
water marks of this scepticism are probably
Ryle’s (1949) The Concept of Mind and
Dennett’s (1991) Consciousness Ex-
plained. Dennett's view is complex on this
question, for although he ultimately doubts
the strength of the epistemic warrant that
can be given to the narrative stream of con-
sciousness, and especially the subject’s
privileged position to report on its contents,
he nevertheless argues that the subject’s
self-reports should be our starting-point.
This is fundamental to his heterophenome-
nological method. This approach advocates
1
Presumably this re-naming has something to do
with thinking of inner speech as primarily an im-
aged sound, rather than unvoiced speech. The notion
of a phonological loop seems to focus on the phe-
nomenology of the passive, rather than active aspect
of inner speech.
117
we need to attempt to offer some explana-
tion of the importance attached to inner
speech in phenomenological accounts.
A window into the phenomenology of inner
speech is provided by Russell Hurlburt’s
Descriptive Experience Sampling technique
(1990). Hurlburt uses an experimental tech-
nique in which subjects are cued by a small
alarm device at various moments in their
day, and then following protocols devel-
oped by Hurlburt, write down the details of
their mental imagery at the moment that the
alarm went off. He argues this technique
allows us to systematically sample the
qualitative characteristics of reported phe-
nomenology [2]. It also allows us to describe
some of the characteristics of inner speech,
and inner imagery in general, in a much
more elaborated fashion.
The content and form of this reported inner
speech seems to be very diverse. Some
people report the perception of being the
author of voice-like inner speech; others, to
hearing voices offering advice or consola-
tion. Sometimes this voice appears to be
their own, and sometimes the voice of an-
other person. Some people report merely
having the sense of experiencing language-
like cognitive episodes without necessarily
hearing any voices or having the sense of
being the author of this speech. The variety
of this speech might serve as some justifi-
cation for the sceptics, or perhaps just evi-
dence of the complexity and variety of the
roles played by speech in our mental lives.
All of these phenomena seem to vary con-
siderably both across individuals, within
individuals at different times and places,
and with regard to whatever activities they
are at that moment engaged in. Hurlburt's
work reveals that much of the content of
consciousness appears to be composed of
speech-like episodes. Except in cases of
severe psychological disturbance or other
abnormal functioning, the inner voice
seems to be the constant accompaniment of
human conscious life. But can we relate
these accounts of the contents of conscious
experience to language as a vehicle?

[2] Although the beeps themselves are random, statistical techniques can be used to understand the distributions of reported mental-event types and indeed to correlate them with other types of behavioural measures (R. Hurlburt & Heavey, 2004).
Some recent accounts of the cognitive role of
language have brought to the fore the way
that language may play a role in sculpting,
stabilising, and supporting forms of thought
which would be otherwise impossible
(Carruthers, 2002; Clark, 2004). Trying to
forge a link theoretically between the phe-
nomenological and functional aspects of
inner-speech has proved so far a difficult
task, but it is one upon which some pro-
gress has now started to be made.
2 – A re-entrance model of inner
speech
Although traditional work on cognitive
modelling made much use of more-or-less
linguaform internal representations, follow-
ing (if sometimes implicitly) some version
of Fodor’s (1975) Language Of Thought
hypothesis, it has shied away from explic-
itly modelling the inner voice (cf. Dennett,
1994). Perhaps this is because of a worry
that the inner voice might be either an
epiphenomenon or a user "illusion" (Dennett,
1991).
Recently work in machine consciousness
has begun to treat the phenomenon of inner
speech and its possible functional role more
directly (Steels, 2003). Steels’ earlier work
used individual-based models in multi-
agent systems to investigate the develop-
ment of collective lexicons. More recently
he has extended these models to attempt to
model syntax.
In Steels’ newer models agents are able to
check the intelligibility of their own sen-
tences by feeding back a prospective utter-
ance through their language interpretation
machinery prior to communication. Sys-
tems of agents with such re-entrant loops
appear to be able to self-organise more
complex grammars than would otherwise
be the case (Steels, 2003, 2005). Re-
entrancy in Steels' models serves the role
of checking the intelligibility of an utter-
ance in the agent's own reception system. Sys-
tems of such "self-talking" agents seem to
be able to achieve much more stable gram-
mars as a result.
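The re-entrant check that Steels' agents perform can be caricatured as follows. This is a toy illustration only: Steels' systems use full construction grammars, whereas the lookup table, `interpret` and `produce` here are invented for exposition:

```python
# Toy sketch of re-entrancy in Steels' sense: a prospective utterance is
# fed back through the agent's own interpretation machinery before it is
# spoken, and only emitted if the agent can parse it back to the
# intended meaning. The lookup "grammar" below is invented.

# utterances the agent's comprehension system can parse, with meanings
GRAMMAR = {
    "red ball": ("red", "ball"),
    "green ball": ("green", "ball"),
    # "ball red" is deliberately absent: the agent cannot parse it
}

def interpret(utterance):
    """The agent's own reception system (None if unparseable)."""
    return GRAMMAR.get(utterance)

def produce(meaning, candidate_utterances):
    """Re-entrant production: emit the first candidate that the agent's
    own reception system maps back to the intended meaning."""
    for utterance in candidate_utterances:
        if interpret(utterance) == meaning:
            return utterance
    return None  # nothing survived the self-intelligibility check

chosen = produce(("red", "ball"), ["ball red", "red ball"])
```

The point of the sketch is simply that production is filtered through the agent's own comprehension system, so only self-intelligible utterances reach the community; in Steels' experiments it is this filtering that lets more complex grammars stabilise.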
It seems that in order to develop the abili-
ties to use complex syntax, re-entrant loops
may be necessary. Steels is thus able to
persuasively link re-entrancy to the genera-
tion of complex grammars in natural lan-
guage and perhaps thereby provide a func-
tional role for the inner-voice.
One problem for this work is that the eve-
ryday construction of grammatical sen-
tences is usually considered a largely un-
conscious activity. In fact, the construction
of grammatically correct sentences is often
given as the paradigmatic example of what
an unconscious cognitive process is like.
Thus, there seems to be some prima facie im-
plausibility in correlating the phenomenol-
ogical inner voice with a mechanism whose
principal cognitive role is the construction
of grammatically correct utterances. While
Steels' arguments about the role of re-
entrancy in the generation of complex
grammars are convincing, the link with the
inner voice is arguably less well made.
One important caveat should be put on this
observation. Insofar as we are treating the
ontogenesis of language in young children,
and the problems of developing capabilities
to use a language for the first time, it may
very well be the case that a large portion of
the child's cognitive resources is taken up in
assembling and comprehending sentences,
and possibly children are much more conscious
of this. It may turn out that the kinds of ac-
tivities that Steels models in his experi-
ments might very well turn out to play a
central role in the consciousness of young
children, and perhaps be the trailblazers for
more elaborate forms of conscious inner
loops to be developed later in their lives.
A further task is to establish links between
the Steels model and the account of the in-
ner voice posited by theorists seeking to
understand the re-organisation of cognition
by language. Arguably his account could
be made to fit with some of the recent ac-
counts of language-for-thought that rely on
the idea that language allows information to
be passed between modules which
wouldn’t otherwise connect (cf. Carruthers,
2002). As the Steels model seems to have
the language production and reception sys-
tem rather separated from other forms of
cognitive activity, it is difficult to say pre-
cisely how this relation could be estab-
lished. Yet if the development of grammar
turns out to be linked in this way to a re-
entrant cognitive architecture, one can
imagine how this architecture could be-
come appropriated by other cognitive func-
tions.
Although the Steels model offers an inter-
esting attempt to show the functional im-
portance of inner speech in stabilising the
learning of grammars of a certain
complexity, this model may be a special in-
stance of the more general case in which self-
directed speech serves to scaffold and stabi-
lise a whole range of cognitive functions.
Yet could such a system also be linked to
the phenomenology of inner-speech and the
role of language in consciousness? More
work clearly needs to be done in order to
establish such a connection.
3 – A self-regulation model of in-
ner-speech
Recent research conducted with Tony
Morse (2005) [3] demonstrates how an alter-
native model of self-directed speech, still
based on re-entrancy, might relate the in-
ner-voice to a range of broader cognitive
activities. The starting assumption for this
work is that the cognitive role of language
is better understood as one of sculpting or
regulating cognitive activity rather than ex-
haustively representing the world (cf.
Clark, 1996). Inner speech could here be
seen as serving as a scaffold for developing
and sustaining cognitive functions beyond
the parsing and construction of meaningful
and grammatical utterances.
In our model we compare a series of possi-
ble architectures for minimal cognitive
agents which have to respond to instruc-
tions in order to fulfil externally indicated
goals, i.e. moving objects around in a
blocks world [4]. Our experiments compare
several types of agents with differing archi-
tectures, some with word re-entrant loops
and some without. All agents are imple-
mented with simple recurrent neural net-
works that are evolved with a genetic algo-
rithm in order to respond to commands by
performing tasks. Some of the agents have
architectures that allow the re-triggering of
command reception systems internally.
The cognitive architecture of the ‘re-
entrant’ agents is arranged such that they
can re-use the channels which are being
used to signal commands to them to re-
trigger their own behaviours.

[3] A much more detailed examination of this work is now available in my unpublished DPhil thesis.

[4] NB. This is not exactly a blocks world in the traditional sense. Rather, agents have extensive sensorimotor couplings with their limited world, rather than it being specified in a purely abstract way. The agent architecture itself is an extension of an active vision model reported in experiments by Floreano, Kato, Marocco, Sauser, & Suzuki (2003).

These channels allow at least the possibility of
establishing new control circuits that use the
same nodes that have previously been used
to receive input from external ‘words’. The
thought here is that if there is some advan-
tage to be had by re-using circuits devel-
oped to respond to words then the agents
will take advantage of this source of useful
adaptation. We find this is the case. Even
such minimal agents can take advantage of
these contingencies to develop word-based
modes of self-regulation.
We show that agents with these ‘re-entrant
speech’ capabilities (as illustrated in Fig-
ure 1) perform considerably better on cer-
tain tasks. This is explained in greater detail
in (Clowes & Morse, 2005). The basic find-
ing is that agents that have architectures
allowing the re-use of language for self-
regulation achieve higher levels of per-
formance more quickly and can stabilise
them for longer than those that do not.
Agents that are able to succeed in all task
conditions make considerable use of auto-
stimulation with words, i.e. they use re-
entrant word nodes to self-trigger.
Re-entrance does not function in our mod-
els merely to facilitate communicative suc-
cess or the generation and interpretation of
complex linguistic constructions, but to
support the construction of more viable behaviours.
Words here are appropriated in a way that
is reminiscent of what Dennett calls auto-
stimulation (Dennett, 1991), not as complex
self-questioning, but as a new mode
of self-regulation. This work then supplies
at least a proof of concept that word-like
constructs can be appropriated from a role
in external regulation (response to
a command) to internal regulation (the
agent regulating itself).
But linking such quite basic modes of auto-
stimulation with words to inner speech
suggests a rather different picture of its un-
derlying nature from that suggested by the
Steels model. Inner speech is, I argue, the
phenomenological dimension of internal-
ised, word-based self regulation.
The phenomenological appearance of such
speech, as speech, depends on it playing a
similar attention focusing role as outer so-
cial speech often does. Further, I would
conjecture that it relies on the same neural
circuits, albeit appropriated for new self-
directed functions.
Figure 1 - The diagram shows an outline of the neural-architecture that is used in the experiments. The
salient aspect is that when a gating neuron is switched on, activity from the output of the network can be
fed back through the nodes that are used as input instructions. More detail on the architecture and some
tasks can be found in (Clowes & Morse, 2005). Agents evolved in these conditions develop elaborate self-
control loops and stabilise solutions to more tasks than those that do not have such loops.
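The gated feedback loop described in the caption can be sketched as follows. This is an assumed, hand-weighted toy, not the authors' evolved networks: all weights, dimensions and the zero-input convention for a closed gate are hypothetical:

```python
# Hypothetical sketch of a gated re-entrant recurrent agent: when the
# gate is open, the network's output is copied back onto its
# instruction-input nodes (word-based auto-stimulation). All weights
# here are illustrative; the authors' agents were evolved.
import math

def step(x, h, w_xh, w_hh, w_hy):
    """One update of a simple recurrent network (tanh hidden, linear out)."""
    n_h = len(h)
    h_new = [math.tanh(sum(w_xh[i][j] * x[j] for j in range(len(x))) +
                       sum(w_hh[i][k] * h[k] for k in range(n_h)))
             for i in range(n_h)]
    y = [sum(w_hy[o][k] * h_new[k] for k in range(n_h))
         for o in range(len(w_hy))]
    return h_new, y

def run(first_input, steps, gate_open, w_xh, w_hh, w_hy, n_hidden):
    """Drive the net with one external 'word', then either feed its own
    output back to the input nodes (gate open) or fall silent."""
    h = [0.0] * n_hidden
    x = list(first_input)
    outputs = []
    for _ in range(steps):
        h, y = step(x, h, w_xh, w_hh, w_hy)
        outputs.append(y)
        # re-entrant loop: re-trigger the instruction nodes with own
        # output, or zero input when the gate is closed
        x = y[:len(x)] if gate_open else [0.0] * len(x)
    return outputs

# identity-like toy weights: input node i excites hidden node i,
# which drives output node i
W_XH = [[1.0, 0.0], [0.0, 1.0]]
W_HH = [[0.0, 0.0], [0.0, 0.0]]
W_HY = [[1.0, 0.0], [0.0, 1.0]]

open_run = run([1.0, 0.0], 3, True, W_XH, W_HH, W_HY, 2)
closed_run = run([1.0, 0.0], 3, False, W_XH, W_HH, W_HY, 2)
```

With the gate open, the network's own output re-triggers its instruction nodes and activity persists after the external 'word' has gone; with the gate closed, activity dies away. This is the minimal sense in which such a loop supports word-based self-stimulation.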
4 - A functional role for inner-
speech
Normal intersubjective speech can certainly
play a role in orienting attention, so why
not internal speech? A shout in the street
can cause an immediate refocusing of atten-
tion: hearing someone shout “mind the
car!” as you are about to cross the road
causes a fundamental reallocation of
your attention.
If the inner voice could similarly be linked
in some way to the allocation of attentional
resources then there is the possibility that it
may provide a window into the relationship
between higher cognition and conscious-
ness more generally. According to Vygot-
sky the internalisation of speech forms a
whole new mode of attentional re-
organisation.
Vygotsky (1986) emphasized the role of
language in the development of control of
action and ultimately of attention. His work
provides an interesting possible way into
the relationship between inner-speech and
consciousness by looking at it through a
developmental prism.
Vygotsky developed his ideas about the
internalisation of language in part as a cri-
tique of Piaget’s ideas about so-called ego-
centric speech. What Piaget called egocen-
tric speech, and developmentalists tend to
call today private speech, is a type of
speech that children produce between the
ages of about 4 and 7. It appears to be ad-
dressed toward the self and eventually
seems to disappear.
For Piaget this speech occurs toward the
end of his pre-operational stage and signi-
fies a still undeveloped ability to take, or
imagine, the perspective of others. Social
speech was thought to be built from this
egoistic basis as children gain more experi-
ence that the point of view of others can be
different (especially through argument with
peers).
A longstanding controversy has arisen
amongst developmentalists about the pro-
venance and direction of this speech.
Whether it is ultimately a disappearing arte-
fact of early developmental egotism, as Pia-
get argued in his early writings (1926), or
alternatively the establishment of a bridge
to linguistically controlled higher psycho-
logical function (Vygotsky, 1986, origi-
nally 1934), either way this speech does not
seem to serve a standard communicative
function.
If Vygotsky’s theory is correct, then inner-
speech has at least its developmental pre-
cursors in this particular form of practically
oriented speech found in children. If more-
over inner-speech once fully internalised
could come to play a role in allocating at-
tention then this could provide a strong link
between the internalisation of language and
the constitution of human consciousness.
Understanding inner speech may yet prove
to be the royal road to understanding con-
sciousness.
5 – Self-Restructuring through in-
ternalisation
Much of the theoretical work arguing that
language plays a role in consciousness de-
pends on the idea that language reshapes
our underlying cognitive mechanisms in
some way. Exactly how and to what pur-
pose this functional re-organisation is a-
chieved is currently part of a lively debate.
The potential for re-using language as an
addition to the brain’s basic modes of or-
ganisation is something which is now start-
ing to be taken very seriously in the phi-
losophy of cognitive science (cf. Clark,
2004; Wheeler, 2004).
Dennett (1991) has argued that the devel-
opment of the self-questioning form of self-
directed speech is absolutely pivotal in the
construction of human consciousness and
its ability to sustain elaborate narrative
threads. His view on this seems linked to
his position that the form of human con-
sciousness is the effect of installing what he
calls a ‘serial virtual machine’ on parallel
processing hardware. A range of accounts
of the functional role of inner speech and
its relationship with consciousness have
also been put forward (Carruthers, 2002;
Clark, 1998; Frawley, 1997) which seek to
expand upon or restructure in various ways
the sort of picture developed by Dennett.
Although it seems possible that episodes of
inner speech are epiphenomenal and fulfil
no functional role in the organisation of
consciousness, it is certainly too early to
rule out the contrary possibility.
One can derive a further link between self-
directed speech and the functional structure
of consciousness from the psychopatho-
logical literature. Evidence seems compel-
ling that the collapse of a normal inner
voice in disorders such as ‘thought inser-
tion’ is often correlated with catastrophic
breakdowns for the organisation of individ-
ual consciousness (R. T. Hurlburt, 1993;
Stephens & Graham, 2000). Disorders such
as schizophrenia are sometimes theorised
as control disorders and this idea gives us a
way into establishing a possible link with
the functional role of internalised speech
(Gallagher, 2000). It points towards some
quite central role for self-directed speech in
the organisation of human consciousness, if
not necessarily along the lines of Dennett’s
model.
One difficulty with this idea is that it is still
very unclear at the level of sub-personal
cognitive architecture how language can
come to play the types of roles that are be-
ing ascribed it by the consciousness theo-
rists. Yet there is a dearth of cognitive
models that even attempt to show how such
a reorganisation might happen [5]. However,
it is possible to further analyse the model
described above to give some insight into
how attentional control through language
internalisation might be established.
The model presented here gives one sug-
gestion as to how the sorts of complex
modes of self-regulation that seem bound
up with human consciousness can get un-
derway.
The simulation work with minimal cogni-
tive agents shows that the re-use of public
symbols in re-organising the ongoing ac-
tivities of self can have cognitive benefits.
These appear to go beyond being able to
interpret and sustain more complex lan-
guages. Rather, the internalisation of lan-
guage in these models has more to do with
the restructuring of ongoing situated action.
Analysing the models further we found that
the development of the ability to re-use a
system of commands appears to move
through broadly three control regimes.
1. Agents develop the capacity to re-
spond to instructions. At this stage
of development, agents might be described
as passive and do not use
self-directed instructions very
much.
[5] Despite these lacunae in more general work on cognitive modelling and the role of language, some interesting work linking linguistic and cognitive function is starting to be done (Sugita & Tani, 2002). This work, however, encompasses quite a distinct formulation of the idea of a role for language in cognition from the work reported here.
2. Agents start to auto-stimulate with
instruction nodes. This regime of
self-control tends to produce inef-
fective and unstable systems of
activity (e.g. agents can sometimes
perform the tasks well but very of-
ten do not).
3. Finally agents develop much more
robust forms of self-control that rely
on the ability to use new regimes of
action made available by the self-
directed loops.
Can these results be linked with Vygotsky’s
ideas about the establishment of new re-
gimes of self-control through the internali-
sation of speech?
Vygotsky, to some extent developing the
ideas of the Gestaltists [6], argued that the
development of self-directed speech was a
form of self-prompting by which children
come to de-centre and move themselves
from one domain of situated activity (or as
he might have termed it practical thought)
to another. He saw this development as be-
ing centrally involved in the establishment
of self-control and attention-regulation that
are characteristic of human consciousness.
The work discussed above gives us a possi-
ble way of understanding the neural-
dynamics underlying the establishment of
this linguistic self-regulation.
6 – Inner speech and the modelling
of consciousness
Notwithstanding current attempts to de-
velop work in synthetic phenomenology
(Chrisley & Holland, 1994), for now [7], hu-
man consciousness is the only type of con-
sciousness which we know intimately. It
seems unlikely that we can afford to ignore
the relevance of the role of language in at-
tempts to model it in machines, not to men-
tion the project of building actually con-
scious machines.

[6] Gestalt psychologists wrote a great deal on the problem of insight and how it was that a problem might suddenly be restructured such that it appears in an entirely new way. Köhler was one who held that tools could play a role in such restructuring.

[7] Perhaps forever; cf. Nagel (1974).
Theorists as diverse and as historically dis-
tant as Vygotsky and Dennett have argued
that self-directed speech plays a central role
in the organisation and even the construc-
tion of human conscious experience. Work
by Hurlburt and others appears to show that
conscious experience abounds with epi-
sodes of internal speech.
If they are right and we are serious in our
attempts to understand human conscious-
ness with synthetic techniques, then we
need to develop more advanced and explicit
models of the role language might play in
its functional organisation. The hypothesis
defended here about the functional role of
internalised speech is that it is a tool for the
focusing or re-focusing of attentional re-
sources.
Inner speech then appears to be of central
importance because it gives an agent the
capacity to restructure not just the external
world but also itself. External activity in
this way becomes redeployed toward inner
restructuring. Simulation models such as
those discussed above give us a unique
mode of developing an understanding of
the functional changes that underlie such a
transition.
This internalisation model of self-directed
speech can be used to provide an explana-
tion of how language plays a role in creat-
ing the regimes of complex self-control and
attention-regulation that are central to the
sorts of consciousness that humans have (cf
Donald, 2001). It does not attempt to ad-
dress the question of why any experiences
are conscious at all. However, it may allow
us a new vantage point on their qualitative
character.
According to the sensorimotor approach or
‘skill theory’ of conscious experience, “ex-
perience is not something we feel but
something we do” (O'Regan, 2001). The
character of perceptual experience, accord-
ing to this theory, is given in the mastery of
sensorimotor contingencies. These contin-
gencies of self have their own governing
laws, just as any other complex physical
system does. Developing a mastery of these laws
through autostimulation-with-words might
be considered akin to the development of a
new perceptual modality.
This mastery of the mechanisms of auto-
stimulation-with-words affords the refocus-
ing of one’s own attention on self. This ex-
ercise of the contingencies of self can
therefore be linked, more generally, to the
qualitative analysis of consciousness in
terms of sensorimotor contingencies (cf.
O'Regan & Noë, 2001). Understanding this
refocusing of attention might help us ex-
plain the uniquely human mode of the
self’s perceptual presence.
References
Baddeley, A. (1997). Human Memory The-
ory and Practice. Hove, UK: Psy-
chology Press.
Baddeley, A., & Hitch, G. (1974). Working
Memory. In G. A. Bower (Ed.), Re-
cent advances in the psychology of
learning and motivation. New
York: Academic Press.
Carruthers, P. (2002). The Cognitive Func-
tion of Language. Behavioral and
Brain Sciences, 25(6).
Chrisley, R., & Holland, A. (1994). Con-
nectionist synthetic epistemology: Re-
quirements for the development of
objectivity. COGS CSRP 353.
Clark, A. (1996). Linguistic Anchors in the
Sea of Thought? Pragmatics And
Cognition, 4(1), 93-103.
Clark, A. (1998). Magic Words: How Language Augments Human Computation. In P. Carruthers & J. Boucher (Eds.), Language and Thought: Interdisciplinary Themes (pp. 162-183). Oxford: Oxford University Press.
Clark, A. (2004). Is language special?
Some remarks on control, coding,
and co-ordination. Language Sci-
ences, 26(6), 717-726.
Clowes, R. W., & Morse, A. (2005). Scaffolding Cognition with Words. In L. Berthouze, F. Kaplan, H. Kozima, Y. Yano, J. Konczak, G. Metta, J. Nadel, G. Sandini, G. Stojanov & C. Balkenius (Eds.), Proceedings of the 5th International Workshop on Epigenetic Robotics, Nara, Japan (Lund University Cognitive Studies, Vol. 123). Lund: LUCS.
Dennett, D. C. (1991). Consciousness Explained. London: Penguin Books.
Dennett, D. C. (1994). The Role of Language in Intelligence. In J. Khalfa (Ed.), What is Intelligence? Cambridge: Cambridge University Press.
Donald, M. (2001). A Mind So Rare: The
Evolution of Human Consciousness.
New York / London: W. W. Norton
& Company.
Floreano, D., Kato, T., Marocco, D.,
Sauser, E., & Suzuki, M. (2003).
Active Vision & Feature Selection:
Co-development of active vision
control and receptive field forma-
tion. Complex visual performance
with simple neural structures. Re-
trieved 30 June 2004
Fodor, J. (1975). The Language of Thought.
New York: MIT Press.
Frawley, W. (1997). Vygotsky and Cognitive Science: Language and the Unification of the Social and Computational Mind. Cambridge, MA: Harvard University Press.
Gallagher, S. (2000). Philosophical conceptions of the self: implications for cognitive science. Trends in Cognitive Sciences, 4(1), 14-21.
Hurlburt, R., & Heavey, C. L. (2004). To
Beep or Not To Beep: Obtaining
Accurate Reports About Awareness.
Journal of Consciousness Studies,
11(7), 113-128.
Hurlburt, R. T. (1990). Sampling Normal and Schizophrenic Inner Experience. New York: Plenum Press.
Hurlburt, R. T. (1993). Sampling Inner Experience in Disturbed Affect. New York: Plenum Press.
James, W. (1890). The Principles of Psychology. New York: Henry Holt.
Nagel, T. (1974). What is it like to be a
bat? Philosophical Review, 83, 435-
450.
O'Regan, J. K. (2001). Experience is not something we feel but something we do: a principled way of explaining sensory phenomenology, with Change Blindness and other empirical consequences.
O'Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939-973.
Piaget, J. (1926). The Language and Thought of the Child. London: Routledge & Kegan Paul.
Ryle, G. (1949). The Concept of Mind.
Chicago: The University of Chicago
Press.
Steels, L. (2003). Language Re-Entrance and the 'Inner Voice'. In O. Holland (Ed.), Machine Consciousness. Exeter: Imprint Academic.
Steels, L. (2005). Constructivist Develop-
ment of Grounded Construction
Grammars.
Stephens, G. L., & Graham, G. (2000). When Self-Consciousness Breaks: Alien Voices and Inserted Thoughts. Cambridge, MA: MIT Press.
Sugita, Y., & Tani, J. (2002). A connectionist model which unifies the behavioral and the linguistic processes. In M. I. Stamenov & V. Gallese (Eds.), Mirror Neurons and the Evolution of Brain and Language (Advances in Consciousness Research, Vol. 42). Amsterdam: John Benjamins.
Vygotsky, L. S. (1986). Thought and Language. Cambridge, MA: MIT Press.
Wheeler, M. (2004). Is language the ulti-
mate artefact? Language Sciences,
26(6), 688-710.
Playing to be Mindful
(Remedies for Chronic Boxology)
Ezequiel Di Paolo
Centre for Computational Neuroscience and Robotics
University of Sussex,
Falmer, Brighton, Sussex BN1 9QH, UK
ezequiel@sussex.ac.uk
Abstract
There is a widespread misconception among critics of the dynamical systems approach to
cognition: the emphasis on embodiment and situatedness has given the wrong impression that the
only cognitive activities that can be explained under this paradigm are those concerned with
ongoing coping with the current situation. To say that the body is actively situated in a world is
only to highlight the most fundamental aspect of all cognitive activity. There is no doubt that the
dynamical systems approach has already proven immensely more successful in such cases than
traditional computational approaches. Even so, as soon as we move to other, more human, cognitive performances, such as planning or imagining, we must, critics predict, return to the tenets of cognitivism/computationalism in some updated form, or worse still, to some kind of hybrid stance. Here I briefly examine the foundations of this claim (and find there aren't really any).
On the positive side, I raise the issue of what the best route is for connecting sensorimotor and situated intelligence with (some) human styles of cognitive activity (misleadingly characterized as "decoupled"). A dynamical systems approach is already useful because it forces us to formulate the questions that traditional representational approaches felt it unnecessary to ask, since they answered them almost axiomatically. What is it to represent? How is it possible to alter the meaning of a situation? What sort of system is a cognizer such that the world is meaningful for