
Special issue on machine consciousness: Self, integration and explanation | Selected papers from the 2011 AISB workshop: Guest editors' introduction

International Journal of Machine Consciousness
© World Scientific Publishing Company
Ron Chrisley
Sackler Centre for Consciousness Science, Centre for Research in Cognitive Science, and
Department of Informatics, University of Sussex, Brighton BN1 9QJ, United Kingdom
Robert W. Clowes
Instituto de Filosofia da Linguagem, Universidade Nova de Lisboa, Av. de Berna, 26 - 4º piso,
Lisbon, 1069-061, Portugal
1. Background
In April of 2011, a symposium entitled “Machine Consciousness 2011: Self, Integration
and Explanation” (MC2011) was held in conjunction with the conference Artificial
Intelligence and the Simulation of Behaviour at the University of York, UK. The
meeting, organized by Steve Torrance and ourselves, built upon previous AISB
conferences in 2005 and 2006 (and an issue of the Journal of Consciousness Studies
collating work originally presented at those conferences). It sought to examine how the
field of machine consciousness is moving forward and what distinctive contributions it
has made so far, and, by subjecting work presented so far to more intensive scrutiny, to
provide a forum for proposing and considering future developments in the area.
Although the field of machine consciousness is still in the process of maturation, there is
evidence of its increasing importance with respect to the broader fields of consciousness
studies, artificial intelligence and philosophy of mind. There is, of course, a new journal
dedicated to the field: The International Journal of Machine Consciousness itself. But
there are also now several well-developed research programmes in the area, and work in
the area is increasingly cited by the wider community, suggesting widening interest. For
example, a recent textbook introduction to consciousness (Susan Blackmore's
Consciousness: An Introduction, 2nd Edition) gives the topic an extended, three-chapter
treatment. Yet we still do not have anything approaching general agreement on the best
way forward, or much work assessing the current strengths and weaknesses of the various
programmes, all of which have different methodologies and success criteria.
Nevertheless, there are reasons to think that the field is at the point where it could offer
important contributions to both the understanding of mind in general and consciousness
in particular.
For these reasons we organized MC2011, asking researchers to submit work that
suggests how the various approaches to machine consciousness might be deepened, how
their contributions might be assessed and extended, and how they might better develop
through stronger relationships with related fields. This issue collects together
developments of selected work presented at this meeting.
2. Focus questions
The meeting interrogated some of the themes central to recent research in machine
consciousness that admit of a more sustained consideration. In our call for papers, we
encouraged submissions falling under one or more of these topics, addressing one or
more of the following questions:
Self modelling: What is the relevance of self-modelling to machine consciousness?
What is a self-model and how might it be implemented? The notion seems to inspire
many in the field but is the idea clear? What types of self-modelling might there be?
How does self-modelling relate to previous problems of representationalism? Do we
need to build embodied self-models? Might research into machine consciousness
along these lines help deal with some problems faced by behaviour-based robotics?
Information integration: What implications do information integration theories have
for machine consciousness? What specific aspects of consciousness might be best
explained by these approaches? Can implemented models shed light on the ways
information might be integrated?
Explanation: What new explanatory resources does the field of machine
consciousness offer? Has the field made good on helping us explain consciousness? If
so, which aspects, models or approaches seem to offer the most value? What aspects
of consciousness might admit of modelling approaches? Do computational models of
consciousness deserve to be described as “machine consciousness” any more than
computational models of weather deserve to be described as “machine weather”?
Neuroscience: What should the relation be between research in machine
consciousness and research in neuroscience or in consciousness science in general?
Can machine consciousness models have any scientific validity unless they are
closely allied to detailed findings and models in current neuroscience?
Functional versus phenomenal consciousness: Is machine consciousness research a
sub-field of artificial intelligence? Is the phenomenology of consciousness fully
analysable in terms of cognitive functionalities? Would an AI agent whose
intelligence ranked higher than most humans or indeed whose intelligence was far
higher than any human’s (cf. the recent notions of AI+ and AI++ [Chalmers, 2010])
necessarily also be an artificially conscious agent?
Ethics: Does machine consciousness research specifically raise any distinctive
ethical problems? What bearing might MC research have on the development of
artificial agents with ethical responsibilities or duties towards us? Or on the
development of artificial agents towards whom we might have ethical responsibilities or duties?
Of course, not all of these questions were addressed (nor could they have been), but we
were (and are) very pleased with not only the quality, but especially the targeted
relevance of the submissions we received.
3. The articles in this issue
3.1. Self System in a Model of Cognition (Ramamurthy, Franklin, & Agrawal)
Franklin and colleagues’ LIDA (Learning Intelligent Distribution Agent) model - based on
Global Workspace Theory, and an extension of the original IDA model [Franklin, 2003] -
is one of the most fully developed implemented models currently being used to
understand and explain consciousness [B. J. Baars & Franklin, 2009]. The article in this
volume by Ramamurthy, Franklin, & Agrawal [this issue] adds a self-model system to the LIDA
architecture. The paper revises the previous work presented at the AISB symposium
[Ramamurthy & Franklin, 2011] primarily to accommodate some rethinking of the LIDA
model itself [McCall, Franklin, & Friedlander, 2010].
The theoretical inspiration for the model of self presented here is Damasio’s [2000]
interlinked notions of Proto Self, Core Self and Extended Self. The
explanatory project, although not made completely explicit, appears to be to show that
within the LIDA system (and thus Global Workspace Theory) it is possible to integrate a
theoretically motivated model of self. The broad aim of the paper, like its predecessor,
remains to produce a model of self which fits into the larger LIDA project to build a
complete implementation of the cognitive system underlying functional consciousness. It
is shown that a Damasio-inspired self-system can be articulated within the LIDA
architecture. But whether this computational compatibility demonstrates the compatibility of these
theories at a deeper level is open to question. It is controversial since Damasio defined
the proto-self as “a coherent collection of neural patterns which map, moment by
moment, the state of the physical structure of the organism in its many
dimensions” [Damasio, 2000, p. 154]. Although the self part of the model is theoretically
based around Damasio’s work, this implementation of the LIDA system has no body; or
rather its embodiment is restricted to a computational model. (See Ziemke [2007] and
Torrance [2007] for more on this controversy within the machine consciousness
community.) Arguably a model that had a more convincing or interesting embodiment
would be required to really develop the implications of theoretical compatibility or
further demonstrate the constraints of these theories.
3.2. A Cognitive Neuroscience-inspired Codelet-based Cognitive Architecture for
the Control of Artificial Creatures with Incremental Levels of Machine
Consciousness (Raizer, Paraense & Gudwin)
The Raizer, Paraense, & Gudwin paper [this issue] outlines an ambitious programme of
research to cumulatively build an architecture for a conscious robot. The way in which
the cognitive model itself is being constructed (and conceptually organised) closely
follows the idea of the triune brain [MacLean, 1990], which posits that the (human) brain
can be understood as a three-levelled organ that can be conceptually separated into
reptilian, paleomammalian and neomammalian components. The idea also implies that
much of the brain's functional architecture can similarly be conceptualised in a tripartite
way. The paper closely follows the Baars / Franklin approach, in particular by
following the global workspace architecture implemented with codelets [Hofstadter &
Mitchell, 1994; Negatu & Franklin, 2002]. The project however uses this architecture to
control a humanoid robot with a complex embodiment: the iCub [Metta, Sandini, Vernon,
Natale, & Nori, 2008] platform. The paper then assesses the various levels of the
cognitive architecture, with reference to the current state of their project, in terms of
the ConsScale pragmatic consciousness scale [Arrabales, Ledezma, & Sanchis, 2010].
3.3. Synthetic phenomenology and the high-dimensional buffer hypothesis
(Chella and Gaglio)
In “Synthetic phenomenology and the high-dimensional buffer hypothesis” [this issue],
Antonio Chella and Salvatore Gaglio detail a part of their robot architecture which they
call the "high-dimensional buffer". Synthetic phenomenology is concerned with both “the
characterization of phenomenal states possessed or modelled by a robot” and “the use of
a robot to help specify phenomenal states.” Chella and Gaglio point out that the work
done in this field to date has typically involved perceptual signals of low dimensionality.
It is their contention that this has limited the field’s explanatory scope. Instead, they
propose the use of a high-dimensional perceptual signal, and show how it can be used to
capture a multi-faceted phenomenal profile, including subconceptual, iconic and
conceptual experience. Rather than creating computational intractability, they contend
that such a high-dimensional space could make it easier for the system to find
representational configurations which best make sense of the world.
3.4. Inner Speech Generation In a Video Game Non Player Character: From
Explanation to Self? (Arrabales)
What, if any, is the functional role of inner speech in consciousness? Recent theoretical
research in cognitive science has suggested that there might be some important cognitive
benefits for agents capable of using language for communication to integrate that
language into other parts of their cognitive architecture [Carruthers, 2002; Clark, 1998,
2005]*. A small body of work in simulation has attempted to put this to the test [Clowes
& Morse, 2005; Mirolli & Parisi, 2011]. There has been much less work on the role of
language in consciousness, although even here researchers have already suggested
approaches that may indicate interesting scope for modelling-based work
[Clowes, 2007; Haikonen, 2006]. Arrabales [this issue] attempts a much more scaled-up
approach to this territory by embedding an architecture for inner speech in an existing
architecture for non-player character AIs in a video-game system.
The game scenario employed is a ‘deathmatch’-style first-person shoot-’em-up which,
as the author admits, may not be the most obvious scenario for exploring inner speech,
but at least it ensures that any AI needs to deal with some issues of embodiment /
embedding in a non-trivial world. The AI architecture is Arrabales’ own CERA-
CRANIUM [Arrabales, Ledezma, & Sanchis, 2009] software framework, principally
intended to enable the construction of machine-consciousness-inspired controllers for
video games. CERA-CRANIUM was developed for building controllers for these non-
player characters and is modelled on some of the more important theoretical approaches
to consciousness: Baars’ Global Workspace Theory [B. Baars, 1997] and Dennett’s
Multiple Drafts theory [Dennett, 1991].
This work extends the CERA-CRANIUM architecture with a component that
translates the mission-level description of the system into a simplified natural language.
One of the great advantages of such an approach is that it makes it possible to explore the
trade-offs and interactions between different representation systems (say, vision and
speech) in a single system, with a potential sophistication not so far attempted. In more
technical detail, the system produces summaries of a given scene and then uses a parse-
tree mechanism to turn the mission-level descriptions into “inner speech”. At the moment
these mechanisms neither play a role in the organisation of the system nor are used
communicatively - they deliver a narrative-like ongoing description of the scene, and so
strictly there are questions about whether this should be seen as inner speech rather than
progress toward implementing such. Nevertheless, the inclusion of this summarising
function in the CERA-CRANIUM architecture is a step toward the testing of various
hypotheses about inner speech in a video-game setting, and will hopefully allow the
possibility of exploring a number of particular hypotheses about the role of inner
speech in consciousness. This work presents an interesting step along that path.
* NB Such work can be seen as a departure from a traditional view in classical cognitive science that saw all
thought as already being couched in a linguaform Language of Thought [Fodor, 1975].
But see [Martinez-Manrique & Vicente, 2010; Morin, 2005] for some relevant recent treatments.
3.5. Consciousness, action selection, meaning and phenomenic anticipation
(Sanz, Hernandez & Sanchez-Escribano)
Sanz, Hernandez, & Sanchez-Escribano [this issue] use the notion of agent
phenomenality to guide the specification of a general architecture for consciousness.
The authors hold that phenomenal states are the ultimate sources of motivation in humans
and possibly other animals, and seek to develop robot architectures with the same kind of
motivational grounding.
The article reports on the ASys research program, which attempts to develop a
“universal technology for autonomy”, the object of which is to specify and build robot
architectures which can be used to robustly control autonomous systems on platforms as
diverse as cars and pacemakers. The primary motivation is that control architectures
which reflect conscious minds are thought to allow the construction of more robust and
adequate control systems.
On the more theoretical side, rather than trying to build machines which have one or
another aspect of consciousness, Sanz et al. believe that we need to design a general
architecture including all of the major sub-systems of consciousness in a single system. In
this their work has a similar motivation to some of the other major machine consciousness
programmes [Aleksander & Morton, 2007; Franklin, 2003] that seek to understand
whole conscious systems. Apart from attempting to build more robust control
architectures, Sanz’s programme seeks to use consciousness research to develop design
patterns for building better robot minds in the future.
3.6. The computational stance is unfit for consciousness (Manzotti)
Central to much work in machine consciousness is the assumption that computation, in its
traditional formal or functional sense, is central to consciousness in that sufficient, or at
least necessary, conditions for consciousness can be stated in computational terms.
Riccardo Manzotti and David Gamez each cast doubt on this assumption. In “The
computational stance is unfit for consciousness” [this issue], Manzotti argues that
computation is not the right kind of interpretation-independent entity that is required for
something to serve as a metaphysical foundation for conscious mentality. Similarly, the
related notion of information is observer-relative in a way that makes it unsuitable for
explaining the place of consciousness in the physical world.
3.7. Empirically grounded claims about consciousness in computers (Gamez)
Gamez, in “Empirically grounded claims about consciousness in computers” [this issue],
takes a similarly anti-computational stance. His point is similar to John Searle’s in that
he does not claim that computers cannot be conscious; rather, the contention is that if a
computer were conscious, it would not be because of implementing this or that formal,
functional characterization. Rather, it would be because it produces “spatiotemporal
patterns in the physical world that match those that are potentially linked with
consciousness in the human brain.” He then goes on to describe how we, armed with the
incontestably conscious adult human case as a starting point, might go about
determining the best way to identify and express those correlates. He closes his
discussion by warning that while his method might be used to demonstrate the presence
of consciousness in a system, it could not be used to prove that a given system is not conscious.
3.8. World-related integrated information: Enactivist and phenomenal
perspectives (Beaton and Aleksander)
One of the possible correlates Gamez considers is the mathematical notion of information
integration; Mike Beaton and Igor Aleksander, in “World-related integrated information:
Enactivist and phenomenal perspectives” [this issue] evaluate this notion’s ability to
explain consciousness. Information integration is a measure that, it has been proposed,
can capture two important aspects of conscious states: that they are “highly
discriminative and yet fundamentally integrated”. In the first part of their paper, Beaton
and Aleksander offer what they take to be an enactivist critique of extant proposals for
explaining consciousness in terms of informational integration. The relevant notion of
information, they contend, is information for a whole, embodied, situated, rational
subject, not information attributed by a third party to sub-personal neural states. Thus
information integration theory, as it stands, is fundamentally on the wrong track. In the
second part of the paper, they contend that even if the enactivist critique is set aside,
problems remain for the information integration approach in its current, non-world-
involving form. They analyse two simulated perceptual systems that, although
equivalent in terms of information integration, differ in that one successfully represents
the sensed external world, while the other does not. They argue that this difference in
representational success manifests a striking difference in phenomenality that therefore
cannot be captured in standard information integration terms.
3.9. Can functional and phenomenal consciousness be divided? (Taylor)
John Taylor's paper addresses the question in his title, "Can functional and phenomenal
consciousness be divided?" [this issue] in a way that simultaneously attempts to
understand consciousness in a causal (and thus evolvable) context, and do justice to a
phenomenally robust notion of subjectivity. He argues that the primary function of
consciousness is to improve attentional control (in terms of both accuracy and speed), and
that one can therefore locate the crucial mechanisms underlying consciousness in the last
of a five-stage evolvable model of attention. Central to this last stage is corollary
discharge (what many readers may know better as "efference copying"), and it is this
which Taylor argues underlies the sense of self, particularly the ownership of
experiences. He cites experimental evidence for the existence of this corollary discharge
in conditions which support the explanatory role he proposes for it.
3.10. A role for consciousness in action selection (Bryson)
Although Joanna Bryson agrees with Taylor on several general points, in "A role for
consciousness in action selection" [this issue] she makes a different connection between
attention and consciousness. Relying on ethological results as much as her own expertise
in computing and artificial intelligence, she argues that in many cases, action selection
does not require consciousness. It is only in cases where, e.g., a system must be capable
of attending to subtle distinctions in order to be able to learn what action to take, that
consciousness is worth its cost, and therefore is present. She proposes that machines will
find useful the same heuristics for allocating attention in these cases as animals do:
"allocate attention on the actions you actually perform, and for a time in proportion to
your uncertainty about your next action." Bryson does not stop here, however, instead
making some remarks concerning the ethical dimension of consciousness. Taking a
position strikingly different from, e.g., that of Torrance (see below), she contends that
"only the conscious can be moral agents, but that does not necessarily imply that all
conscious entities must be treated as moral agents." In fact, "we are obliged when we
make intelligent machines to make ones we are not obliged to."
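Bryson states her heuristic informally; purely as an illustrative sketch (our rendering, not her implementation), "attend for a time in proportion to your uncertainty about your next action" can be modelled by making attentional dwell time proportional to the Shannon entropy of the agent's next-action distribution:

```python
import math

def attention_time(action_probs, base_time=1.0):
    # Toy rendering of the heuristic quoted above: dwell on the action
    # actually performed for a time proportional to the agent's
    # uncertainty (Shannon entropy, in bits) over its next action.
    # Illustrative only; not Bryson's implementation.
    entropy = sum(-p * math.log2(p) for p in action_probs if p > 0)
    return base_time * entropy

# A certain next action warrants no costly attention...
print(attention_time([1.0, 0.0, 0.0]))              # prints 0.0
# ...while maximal uncertainty over four actions warrants the most.
print(attention_time([0.25, 0.25, 0.25, 0.25]))     # prints 2.0
```

On this toy reading, consciousness-supported attention is "worth its cost" exactly when the entropy term is large.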
3.11. Super-intelligence and (Super-)Consciousness (Torrance)
Does high intelligence necessarily imply consciousness? Steve Torrance’s paper [this
issue] discusses the possibility of hyperintelligence (AI++) - the idea that there might be
a being whose intelligence exceeds human intelligence in the way that humans exceed
mice - and its implications for the study of consciousness. Part of the argument for AI++
stems from Chalmers’ [2010] consideration of ‘The Singularity’, where he argues that
were we to build an AI that approached or exceeded human intelligence, that machine
could design more intelligent successors, leading to a sequence of ever-
more intelligent machines whose intellectual prowess could soon far exceed our own.
This potentially runaway intelligence leads Torrance to what he calls the “Drop-Out
Question” (DOC): namely, would machines that were as or more intelligent than us
necessarily be conscious?
Several MC researchers (especially perhaps Arrabales [2010]) imply as much. The
main part of Torrance’s paper explores a series of different positions on the DOC and
how they have played out in the literature. They range from various sorts of scepticism
(hard, soft and agnosticism) to positions that AI++ would be sufficient for both functional
and even phenomenal consciousness.
As the paper argues, consideration of the DOC has interesting implications for
machine consciousness research more generally. If AI++ turns out to be sufficient for
consciousness, then MC research turns out to be central to artificial intelligence, rather
than some odd or radical wing of it.
Torrance further argues that because phenomenal consciousness at least has important
ethical implications, some answers to the DOC imply that AI itself has important
ethical implications. Torrance makes the case for further exploration of the linkages and
implied relations between artificial intelligence and machine consciousness.
3.12. Virtualist representation (Clowes and Chrisley)
It must be admitted that we took some liberties in submitting our own paper, "Virtualist
representation" for inclusion in this special issue. Unlike the other papers, it was not
actually presented at the 2011 meeting, but is rather based on developments of work that
we have presented at similar AISB machine consciousness symposia in 2005 and 2006.
We start by noting the prevalence, in recent machine consciousness research and beyond,
of the virtual reality metaphor for consciousness. We then attempt to steer a middle
course between two versions of this metaphor - one ("presentational") which, at its worst,
suffers from the same problems as indirect and snapshot theories of perception, and
another ("enactive") which eschews representation entirely, to its own detriment. We
sketch an expectational theory of consciousness that involves representations (giving it an
edge over enactive virtualism) that are truly virtual, in that they need not be occurrently
tokened in order to contribute to the content of experience (thus avoiding the problems
confronting typical presentationalist accounts). Expectations are, on this account,
essentially predictive, and are therefore related to the kind of architecture proposed by
Bryson, as well as the recent sub-field of predictive coding.
4. Acknowledgements
Robert Clowes gratefully acknowledges Portuguese Science Foundation
Grant /BPD/70440/2010, which supported the writing of this introduction. We
would also like to thank Steve Torrance for helping organize the meeting on which this
issue is based, John Taylor’s widow Pamela, for her kind permission to include his paper,
Jenny Prince Chrisley for assisting with the editing and preparation of this issue, and the
editor-in-chief of this journal for his support and endless patience.
References
Aleksander, I., & Morton, H. (2007). Why axiomatic models of being conscious? Journal of
Consciousness Studies, 14(7) (special issue on machine consciousness).
Arrabales, R. (this issue). Inner speech generation in a video game non-player character: From
explanation to self? International Journal of Machine Consciousness.
Arrabales, R., Ledezma, A., & Sanchis, A. (2010). ConsScale: A pragmatic scale for measuring the
level of consciousness in artificial agents. Journal of Consciousness Studies, 17(3-4), 131-.
Arrabales, R., Ledezma, A., & Sanchis, A. (2009). CERA-CRANIUM: A test bed for machine
consciousness research. Paper presented at the International Workshop on Machine
Consciousness 2009.
Baars, B. (1997). In the theatre of consciousness: Global Workspace Theory, a rigorous scientific
theory of consciousness. Journal of Consciousness Studies, 4(4), 292-309.
Baars, B. J., & Franklin, S. (2009). Consciousness is computational: The Lida model of global
workspace theory. International Journal of Machine Consciousness, 1(1), 23-.
Beaton, M., & Aleksander, I. (this issue). World-related integrated information: Enactivist and
phenomenal perspectives. International Journal of Machine Consciousness.
Bryson, J. (this issue). A role for consciousness in action selection. International Journal of
Machine Consciousness.
Carruthers, P. (2002). The cognitive function of language. Behavioral and Brain Sciences, 25(6).
Chalmers, D. (2010). The Singularity: A philosophical analysis. Journal of Consciousness Studies,
17(9-10), 7-65.
Chella, A., & Gaglio, S. (this issue). Synthetic phenomenology and the high-dimensional buffer
hypothesis. International Journal of Machine Consciousness.
Chrisley, R., & Parthemore, J. (2007). Synthetic phenomenology: Exploiting embodiment to
specify the non-conceptual content of visual experience. Journal of Consciousness
Studies, 14(7), 44-58.
Clark, A. (1998). Magic Words: How Language Augments Human Computation. In P. Carruthers
& J. Boucher (Eds.), Language and Thought: Interdisciplinary Themes (pp. 162-183).
Oxford: Oxford University Press.
Clark, A. (2005). Material Symbols: From Translation to Co-ordination in the constitution of
thought and reason. Paper presented at the Cognitive Science Society Conference, Stresa,
Italy.
Clowes, R. W. (2007). A Self-Regulation Model of Inner Speech and its Role in the Organisation
of Human Conscious Experience. Journal of Consciousness Studies, 14(7), 59-71.
Clowes, R. W., & Chrisley, R. (this issue). Virtualist Representation. International Journal of
Machine Consciousness.
Clowes, R. W., & Morse, A. (2005). Scaffolding Cognition with Words. In L. Berthouze, F.
Kaplan, H. Kozima, Y. Yano, J. Konczak, G. Metta, J. Nadel, G. Sandini, G. Stojanov &
C. Balkenius (Eds.), Proceedings of the 5th International Workshop on Epigenetic
Robotics (pp. 102-105). Lund: Lund University Cognitive Studies, 123.
Damasio, A. R. (2000). The Feeling of What Happens: Body, Emotion and the Making of
Consciousness. Vintage.
Dennett, D. C. (1991). Consciousness Explained. Penguin Books.
Fodor, J. (1975). The Language of Thought. Cambridge, MA: MIT Press.
Franklin, S. (2003). A Conscious Artifact? Journal of Consciousness Studies, 10(4), 47-66.
Gamez, D. (this issue). Empirically grounded claims about consciousness in computers.
International Journal of Machine Consciousness.
Haikonen, P. (2006). Towards Streams of Consciousness; Implementing Inner Speech. In R.
Chrisley, R. W. Clowes & S. Torrance (Eds.), Proceedings of AISB06 Symposium on
Integrative Approaches to Machine Consciousness.
Hofstadter, D., & Mitchell, M. (1994). The copycat project. In Advances in Connectionist and
Neural Computation Theory, Vol. 2: Analogical Connections.
MacLean, P. D. (1990). The Triune Brain in Evolution: Role in Paleocerebral Functions. Springer.
Manzotti, R. (this issue). The computational stance is unfit for consciousness. International Journal
of Machine Consciousness.
Martinez-Manrique, F., & Vicente, A. (2010). What the! The role of inner speech in conscious
thought. Journal of Consciousness Studies, 17(9-10).
McCall, R., Franklin, S., & Friedlander, D. (2010). Grounded Event-Based and Modal
Representations for Objects, Relations, Beliefs, Etc. FLAIRS-23, Daytona Beach, FL.
Metta, G., Sandini, G., Vernon, D., Natale, L., & Nori, F. (2008). The iCub humanoid robot: an
open platform for research in embodied cognition. Paper presented at the Proceedings of
the 8th Workshop on Performance Metrics for Intelligent Systems.
Mirolli, M., & Parisi, D. (2011). Towards a Vygotskyan cognitive robotics: the role of language as
a cognitive tool. New Ideas in Psychology, 29(3), 298-311.
Morin, A. (2005). Possible links between self-awareness and inner speech: Theoretical background,
underlying mechanisms, and empirical evidence. Journal of Consciousness Studies, 12(4-
5), 115-134.
Negatu, A., & Franklin, S. (2002). An action selection mechanism for 'conscious' software agents.
Cognitive Science Quarterly, 2, 363-386.
Raizer, K., Paraense, A. L. O., & Gudwin, R. R. (this issue). A Cognitive Neuroscience-inspired
Codelet-based Cognitive Architecture for the Control of Artificial Creatures with
Incremental Levels of Machine Consciousness. International Journal of Machine
Consciousness.
Ramamurthy, U., & Franklin, S. (2011). Self system in a model of cognition. Paper presented at the
AISB Symposium on Machine Consciousness.
Ramamurthy, U., Franklin, S., & Agrawal, P. (this issue). Self System in a Model of Cognition.
International Journal of Machine Consciousness.
Sanz, R., Hernandez, C., & Sanchez-Escribano, M. G. (this issue). Consciousness, action selection,
meaning and phenomenic anticipation. International Journal of Machine Consciousness.
Taylor, J. (this issue). Can functional and phenomenal consciousness be divided? International
Journal of Machine Consciousness.
Torrance, S. (2007). Two Conceptions of Machine Phenomenality. Journal of Consciousness
Studies, 14(7), 154-166.
Torrance, S. (this issue). Super-intelligence and (Super-)Consciousness. International Journal of
Machine Consciousness.
Ziemke, T. (2007). The embodied self: Theories, hunches and robot models. Journal of
Consciousness Studies, 14(7), 167-179.
ResearchGate has not been able to resolve any citations for this publication.
Full-text available
Information integration is a measure, developed by Tononi and co-researchers, of the capacity of dynamic neural networks to be in informational states that are unique and indivisible. This is supposed to correspond to the intuitive "feel" of a mental state: highly discriminative and yet fundamentally integrated. Recent versions of the theory include a definition of qualia, which measures the geometric contribution of individual neural structures to the overall measure. In this paper, we examine these approaches from two philosophical perspectives: enactivism (externalism) and phenomenal states (internalism). We suggest that a promising enactivist response is to agree with Tononi that consciousness consists of integrated information, but to argue for a radical rethink about the nature of information itself. We argue that information is most naturally viewed as a three-place relation involving a Bayesian-rational subject, the subject's evidence and the world (as brought under the subject's evolving understanding). To have (or gain) information is to behave in a Bayesian-rational way in response to evidence. Information only ever belongs to whole, rationally behaving agents; information is only "in the brain" from the point of view of a theorist seeking to explain behavior. Rational behavior (and hence information) will depend on brain, body and world: embodiment matters.

Then, from a phenomenal states perspective, we examine the way that internal states of a network can be not only unique and indivisible but can also reflect this coherence as it might exist in an external world. Extending previously published material, we propose that two systems could both score well on traditional integration measures even though one had meaningful world-representing states and the other did not. A model that involves iconic learning and depiction is discussed and tested in order to show how internal states can be about the world and how measures of integration influence this process. This retains some of the structure of Tononi's integration measurements, but operates within sets of states of the world as filtered by receptors and repertoires of internal states achieved by depiction. This suggests a formalization of qualia that does not ignore world-reflecting content and that relates to internal states which aid the conscious organism's ability to act appropriately in the world of which it is conscious. Thus, a common theme emerges: Tononi has good intuitions about the necessary nature of consciousness, but his is not the only theory of experience able to do justice to these key intuitions. Tononi's theory has an apparent weakness in that it treats conscious "information" as something intrinsically meaningless (i.e., without any necessary connection to the world), whereas both the approaches canvassed here naturally relate experienced information to the world.
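To make the notion of an integration measure concrete, the sketch below computes a crude stand-in for Tononi's phi: the minimum mutual information across all bipartitions of a small system's state distribution. This is an illustrative simplification, not the effective-information formalism of the work discussed above, and the example distributions are invented for the demonstration.

```python
import itertools
import math

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def marginal(joint, idx):
    """Marginal distribution over the node indices in idx."""
    m = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

def toy_integration(joint, n):
    """Minimum mutual information over all bipartitions of n nodes
    (a crude stand-in for an integration measure, not real phi)."""
    nodes = range(n)
    best = float("inf")
    for r in range(1, n // 2 + 1):
        for part in itertools.combinations(nodes, r):
            rest = tuple(i for i in nodes if i not in part)
            mi = (entropy(marginal(joint, part).values())
                  + entropy(marginal(joint, rest).values())
                  - entropy(joint.values()))
            best = min(best, mi)
    return best

# Perfectly correlated 3-node system: every bipartition shares 1 bit.
joint_corr = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
# Fully independent system: no bipartition shares any information.
joint_ind = {s: 0.125 for s in itertools.product((0, 1), repeat=3)}

print(toy_integration(joint_corr, 3))  # high integration: 1.0 bit
print(toy_integration(joint_ind, 3))   # no integration: 0.0 bits
```

The contrast between the two toy systems echoes the paper's point: a high score on such a measure says nothing, by itself, about whether the integrated states represent anything in the world.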
It is customary to assume that agents receive information from the environment through their sensors. It is equally customary to assume that an agent is capable of information processing and thus of computation. These two assumptions may be misleading, particularly because so much basic theoretical work relies on the concepts of information and computation. By analogy with Dennett's intentional stance, I suggest that much discussion in cognitive science, neuroscience and artificial intelligence is biased by a naïve notion of computation resulting from the adoption of a computational stance. As a case study, I will focus on David Chalmers' view of computation in cognitive agents. In particular, I will challenge the thesis of computational sufficiency. I will argue that computation is no more than the ascription of an abstract model to a series of states and dynamic transitions in a physical agent. As a result, computation is akin to centers of mass and other epistemic shortcuts, which are insufficient to be the underpinnings of a baffling-yet-physical phenomenon like consciousness.
This paper seeks to identify, clarify, and perhaps rehabilitate the virtual reality metaphor as applied to the goal of understanding consciousness. Some proponents of the metaphor apply it in a way that implies a representational view of experience of a particular, extreme form that is indirect, internal and inactive (what we call "presentational virtualism"). In opposition to this is an application of the metaphor that eschews representation, instead preferring to view experience as direct, external and enactive ("enactive virtualism"). This paper seeks to examine some of the strengths and weaknesses of these virtuality-based positions in order to assist the development of a related, but independent view of experience: virtualist representationalism. Like presentational virtualism, this third view is representational, but like enactive virtualism, it places action centre stage, and does not require, in accounting for the richness of visual experience, global representational "snapshots" corresponding to the entire visual field to be tokened at any one time.
This paper looks closely at previously enunciated axioms that specifically include phenomenology as the sense of a self in a perceptual world. This, we suggest, is an appropriate way of doing science on a first-person phenomenon. The axioms break consciousness down into five key components: presence, imagination, attention, volition and emotions. The paper examines anew the mechanism of each and how they interact to give a single sensation. An abstract architecture, the Kernel Architecture, is introduced as a starting point for building computational models. The thrust of the paper is to relate the axioms to the kernel architecture and indicate that this opens a way of discussing some first-person issues: tests for consciousness, animal consciousness and Higher Order Thought.
Machine consciousness has been reported to offer very appealing advantages for the control of software agents. The main goal of this work is to develop artificial creatures, controlled by cognitive architectures, with different levels of machine consciousness. To fulfil this goal, we propose the application of cognitive neuroscience concepts to incrementally develop a cognitive architecture following the evolutionary steps taken by the animal brain. The triune brain theory proposed by MacLean, together with Arrabales' ConsScale, will serve as a roadmap for each developmental stage, while iCub, a humanoid robot, and its simulator will serve as platforms for the experiments. An entirely codelet-based "Core" system has been implemented, serving the whole architecture.
Inner speech is an aspect of human cognition that has been largely neglected by traditional artificial intelligence research. It is argued here that inner speech is an important contributor to cognition and consciousness, and that conscious machines should therefore incorporate it as well. The realization of inner speech in machines also involves notoriously difficult linguistic issues, such as sentence understanding. Here an approach to language processing by associative neural networks is proposed as a solution; this method works without explicit parsing or grammatical rules. The cognitive effects of inner speech arise from its content: inner speech is about something, and that content affects the operation and behavior of the cognitive system. Consciousness involves awareness of mental content; inner speech is seen here as one tool for introspection that facilitates this awareness. In inner speech we may comment on ourselves in ways we have learned from others. This self-appraisal is seen as a process that leads to enhanced social self-awareness and self-image.
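As a minimal illustration of the grammar-free associative approach just described, a Hebbian outer-product network can learn word-successor associations and recall them with no parser or grammar rules. The vocabulary, toy corpus and function names below are invented for the example and are not taken from the paper itself.

```python
import numpy as np

# Hypothetical toy vocabulary; each word gets a one-hot vector.
vocab = ["the", "robot", "speaks", "inner", "words"]
index = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    v = np.zeros(len(vocab))
    v[index[word]] = 1.0
    return v

# Hebbian outer-product learning: strengthen the link from each
# word to its successor in a toy corpus. No parsing, no grammar.
corpus = ["the", "robot", "speaks", "inner", "words"]
W = np.zeros((len(vocab), len(vocab)))
for prev, nxt in zip(corpus, corpus[1:]):
    W += np.outer(one_hot(nxt), one_hot(prev))

def recall(word):
    """Retrieve the most strongly associated successor of `word`."""
    return vocab[int(np.argmax(W @ one_hot(word)))]

print(recall("robot"))  # recalls the learned successor: "speaks"
```

Purely associative recall of this kind is, of course, far short of sentence understanding, but it shows the basic mechanism: content is carried by learned associations between states rather than by explicit symbolic rules.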
We answer the question raised by the title by developing a neural architecture for the attention control system in animals in a hierarchical manner, following what we conjecture is an evolutionary path. The resulting evolutionary model (based on CODAM at the highest level) and answer to the question allow us to consider both different forms of consciousness and how machine consciousness could itself possess a variety of forms.
Research is starting to identify correlations between consciousness and some of the spatiotemporal patterns in the physical brain. For theoretical and practical reasons, the results of experiments on the correlates of consciousness have ambiguous interpretations. At any point in time, a number of hypotheses about the correlates of consciousness in the brain co-exist, all of which are compatible with the current experimental results. This paper argues that consciousness should be attributed to any system that exhibits spatiotemporal physical patterns matching those hypotheses about the correlates of consciousness that are compatible with the current experimental results. Some computers running some programs should be attributed consciousness, because they produce spatiotemporal patterns in the physical world that match those that are potentially linked with consciousness in the human brain.