ORIGINAL RESEARCH ARTICLE
published: 17 June 2014
doi: 10.3389/fpsyg.2014.00577
Objects of consciousness
Donald D. Hoffman1* and Chetan Prakash2
1Department of Cognitive Sciences, University of California, Irvine, CA, USA
2Department of Mathematics, California State University, San Bernardino, CA, USA
Edited by:
Chris Fields, New Mexico State
University, USA (retired)
Reviewed by:
John Serences, University of
California San Diego, USA
David Marcus Appleby, University of
Sydney, Australia
*Correspondence:
Donald D. Hoffman, Department of
Cognitive Sciences, University of
California, Irvine, CA 92697, USA
e-mail: ddhoff@uci.edu
Current models of visual perception typically assume that human vision estimates true
properties of physical objects, properties that exist even if unperceived. However, recent
studies of perceptual evolution, using evolutionary games and genetic algorithms, reveal
that natural selection often drives true perceptions to extinction when they compete
with perceptions tuned to fitness rather than truth: Perception guides adaptive behavior;
it does not estimate a preexisting physical truth. Moreover, shifting from evolutionary
biology to quantum physics, there is reason to disbelieve in preexisting physical truths:
Certain interpretations of quantum theory deny that dynamical properties of physical
objects have definite values when unobserved. In some of these interpretations the
observer is fundamental, and wave functions are compendia of subjective probabilities,
not preexisting elements of physical reality. These two considerations, from evolutionary
biology and quantum physics, suggest that current models of object perception require
fundamental reformulation. Here we begin such a reformulation, starting with a formal
model of consciousness that we call a “conscious agent.” We develop the dynamics of
interacting conscious agents, and study how the perception of objects and space-time
can emerge from such dynamics. We show that one particular object, the quantum free
particle, has a wave function that is identical in form to the harmonic functions that
characterize the asymptotic dynamics of conscious agents; particles are vibrations not of
strings but of interacting conscious agents. This allows us to reinterpret physical properties
such as position, momentum, and energy as properties of interacting conscious agents,
rather than as preexisting physical truths. We sketch how this approach might extend to
the perception of relativistic quantum objects, and to classical objects of macroscopic
scale.
Keywords: consciousness, quantum theory, Markov chains, combination problem, geometric algebra
INTRODUCTION
The human mind is predisposed to believe that physical objects,
when unperceived, still exist with definite shapes and locations
in space. The psychologist Piaget proposed that children start to
develop this belief in “object permanence” around 9 months of
age, and have it firmly entrenched just 9 months later (Piaget,
1954). Further studies suggest that object permanence starts as
early as 3 months of age (Bower, 1974; Baillargeon and DeVos,
1991).
Belief in object permanence remains firmly entrenched into
adulthood, even in the brightest of minds. Abraham Pais said of
Einstein, “We often discussed his notions on objective reality. I
recall that on one walk Einstein suddenly stopped, turned to me
and asked whether I really believed that the moon exists only
when I look at it” (Pais, 1979). Einstein was troubled by inter-
pretations of quantum theory that entail that the moon does not
exist when unperceived.
Belief in object permanence underlies physicalist theories of
the mind-body problem. When Gerald Edelman claimed, for
instance, that “There is now a vast amount of empirical evi-
dence to support the idea that consciousness emerges from the
organization and operation of the brain” he assumed that the
brain exists when unperceived (Edelman, 2004). When Francis
Crick asserted the “astonishing hypothesis” that “You’re noth-
ing but a pack of neurons” he assumed that neurons exist when
unperceived (Crick, 1994).
Object permanence underlies the standard account of evo-
lution by natural selection. As James memorably put it, “The
point which as evolutionists we are bound to hold fast to is
that all the new forms of being that make their appearance are
really nothing more than results of the redistribution of the
original and unchanging materials. The self-same atoms which,
chaotically dispersed, made the nebula, now, jammed and tem-
porarily caught in peculiar positions, form our brains” (James,
1890). Evolutionary theory, in the standard account, assumes that
atoms, and the replicating molecules that they form, exist when
unperceived.
Object permanence underlies computational models of the
visual perception of objects. David Marr, for instance, claimed
“We ... very definitely do compute explicit properties of the
real visible surfaces out there, and one interesting aspect of the
evolution of visual systems is the gradual movement toward the
difficult task of representing progressively more objective aspects
of the visual world” (Marr, 1982). For Marr, objects and their
surfaces exist when unperceived, and human vision has evolved
to describe their objective properties.
Bayesian theories of vision assume object permanence. They
model object perception as a process of statistical estimation of
object properties, such as surface shape and reflectance, that exist
when unperceived. As Alan Yuille and Heinrich Bülthoff put it,
“We define vision as perceptual inference, the estimation of scene
properties from an image or sequence of images ...” (Yuille and
Bülthoff, 1996).
There is a long and interesting history of debate about which
properties of objects exist when unperceived. Shape, size, and
position usually make the list. Others, such as taste and color,
often do not. Democritus, a contemporary of Socrates, famously
claimed, “by convention sweet and by convention bitter, by con-
vention hot, by convention cold, by convention color; but in
reality atoms and void” (Taylor, 1999).
Locke proposed that “primary qualities” of objects, such as
“bulk, figure, or motion” exist when unperceived, but that “sec-
ondary properties” of objects, such as “colors and smells” do not.
He then claimed that “... the ideas of primary qualities of bod-
ies are resemblances of them, and their patterns do really exist
in the bodies themselves, but the ideas produced in us by these
secondary qualities have no resemblance of them at all” (Locke,
1690).
Philosophical and scientific debate continues to this day on
whether properties such as color exist when unperceived (Byrne
and Hilbert, 2003; Hoffman, 2006). But object permanence, cer-
tainly regarding shape and position, is so deeply assumed by the
scientific literature in the fields of psychophysics and computa-
tional perception that it is rarely discussed.
It is also assumed in the scientific study of consciousness and
the mind-body problem. Here the widely acknowledged failure
to create a plausible theory forces reflection on basic assump-
tions, including object permanence. But few researchers in fact
give it up. To the contrary, the accepted view is that aspects
of neural dynamics—from quantum-gravity induced collapses
of wavefunctions at microtubules (Hameroff, 1998) to informa-
tional properties of re-entrant thalamo-cortical loops (Tononi,
2004)—cause, or give rise to, or are identical to, conscious-
ness. As Colin McGinn puts it, “we know that brains are the
de facto causal basis of consciousness, but we have, it seems,
no understanding whatever of how this can be so” (McGinn,
1989).
EVOLUTION AND PERCEPTION
The human mind is predisposed from early childhood to assume
object permanence, to assume that objects have shapes and posi-
tions in space even when the objects and space are unperceived. It
is reasonable to ask whether this assumption is a genuine insight
into the nature of objective reality, or simply a habit that is
perhaps useful but not necessarily insightful.
We can look to evolution for an answer. If we assume that
our perceptual and cognitive capacities have been shaped, at least
in part, by natural selection, then we can use formal models of
evolution, such as evolutionary game theory (Lieberman et al.,
2005; Nowak, 2006) and genetic algorithms (Mitchell, 1998), to
explore if, and under what circumstances, natural selection favors
perceptual representations that are genuine insights into the true
nature of the objective world.
Evaluating object permanence on evolutionary grounds might
seem quixotic, or at least unfair, given that we just noted that
evolutionary theory, as it’s standardly described, assumes object
permanence (e.g., of DNA and the physical bodies of organisms).
How then could one possibly use evolutionary theory to test what
it assumes to be true?
However, Richard Dawkins and others have observed that the
core of evolution by natural selection is an abstract algorithm
with three key components: variation, selection, and retention
(Dennett, 1995; Blackmore, 1999). This abstract algorithm con-
stitutes a “universal Darwinism” that need not assume object
permanence and can be profitably applied in many contexts
beyond biological evolution. Thus, it is possible, without beg-
ging the question, to use formal models of evolution by natural
selection to explore whether object permanence is an insight
or not.
Jerry Fodor has criticized the theory of natural selection itself,
arguing, for instance, that it impales itself with an intensional fallacy, viz., inferring from the premise that “evolution is a process in which creatures with adaptive traits are selected” to the conclusion that “evolution is a process in which creatures are selected for their adaptive traits” (Fodor and Piattelli-Palmarini, 2010).
However, Fodor’s critique seems wide of the mark (Futuyma,
2010) and the evidence for evolution by natural selection is
overwhelming (Coyne, 2009; Dawkins, 2009).
What, then, do we find when we explore the evolution of
perception using evolutionary games and genetic algorithms?
The standard answer, at least among vision scientists, is that we
should find that natural selection favors veridical perceptions,
i.e., perceptions that accurately represent objective properties of
the external world that exist when unperceived. Steven Palmer,
for instance, in a standard graduate-level textbook, states that
“Evolutionarily speaking, visual perception is useful only if it is
reasonably accurate ... Indeed, vision is useful precisely because it
is so accurate. By and large, what you see is what you get. When this
is true, we have what is called veridical perception ... perception
that is consistent with the actual state of affairs in the environ-
ment. This is almost always the case with vision ...” (Palmer,
1999).
The argument, roughly, is that those of our predecessors whose
perceptions were more veridical had a competitive advantage
over those whose perceptions were less veridical. Thus, the genes
that coded for more veridical perceptions were more likely to
propagate to the next generation. We are, with good probability,
the offspring of those who, in each succeeding generation, per-
ceived more truly, and thus we can be confident that our own
perceptions are, in the normal case, veridical.
The conclusion that natural selection favors veridical percep-
tions is central to current Bayesian models of perception, in which
perceptual systems use Bayesian inference to estimate true prop-
erties of the objective world, properties such as shape, position,
motion, and reflectance (Knill and Richards, 1996; Geisler and
Diehl, 2003). Objects exist and have these properties when unper-
ceived, and the function of perception is to accurately estimate
pre-existing properties.
However, when we actually study the evolution of perception
using Monte Carlo simulations of evolutionary games and genetic
algorithms, we find that natural selection does not, in general,
favor perceptions that are true reports of objective properties of
the environment. Instead, it generally favors perceptual strategies
that are tuned to fitness (Mark et al., 2010; Hoffman et al., 2013;
Marion, 2013; Mark, 2013).
Why? Several principles emerge from the simulations. First,
there is no free information. For every bit of information one
obtains about the external world, one must pay a price in energy,
e.g., in calories expended to obtain, process and retain that infor-
mation. And for every calorie expended in perception, one must
go out and kill something and eat it to get that calorie. So
natural selection tends to favor perceptual systems that, ceteris
paribus, use fewer calories. One way to use fewer calories is
to see less truth, especially truth that is not informative about
fitness.
Second, for every bit of information one obtains about the
external world, one must pay a price in time. More information
requires, in general, more time to obtain and process. But in the
real world where predators are on the prowl and prey must be
wary, the race is often to the swift. It is the slower gazelle that
becomes lunch for the swifter cheetah. So natural selection tends
to favor perceptual systems that, ceteris paribus, take less time.
One way to take less time is, again, to see less truth, especially
truth that is not informative about fitness.
Third, in a world where organisms are adapted to niches and
require homeostatic mechanisms, the fitness functions guiding
their evolution are generally not monotonic functions of struc-
tures or quantities in the world. Too much salt or too little can
be devastating; something in between is just right for fitness. The
same goldilocks principle can hold for water, altitude, humidity,
and so on. In these cases, perceptions that are tuned to fitness are
ipso facto not tuned to the true structure of the world, because the
two are not monotonically related; knowing the truth is not just
irrelevant, it can be inimical, to fitness.
Fourth, in the generic case where noise and uncertainty are
endemic to the perceptual process, a strategy that estimates a true
state of the world and then uses the utility associated to that state
to govern its decisions must throw away valuable information
about utility. It will in general be driven to extinction by a strategy
that does not estimate the true state of the world, and instead uses
all the information about utility (Marion, 2013).
Fifth, more complex perceptual systems are more difficult to
evolve. Monte Carlo simulations of genetic algorithms show that
there is a combinatorial explosion in the complexity of the search
required to evolve more complex perceptual systems. This com-
binatorial explosion itself is a selection pressure toward simpler
perceptual systems.
In short, natural selection does not favor perceptual systems
that see the truth in whole or in part. Instead, it favors per-
ceptions that are fast, cheap, and tailored to guide behaviors
needed to survive and reproduce. Perception is not about truth,
it’s about having kids. Genes coding for perceptual systems that
increase the probability of having kids are ipso facto the genes
that are more likely to code for perceptual systems in the next
generation.
THE INTERFACE THEORY OF PERCEPTION
Natural selection favors perceptions that are useful though not
true. This might seem counterintuitive, even to experts in percep-
tion. Palmer, for instance, in the quote above, makes the plausible
claim that “vision is useful precisely because it is so accurate”
(Palmer, 1999). Geisler and Diehl agree, taking it as obvious that
“In general, (perceptual) estimates that are nearer the truth have
greater utility than those that are wide of the mark” (Geisler and
Diehl, 2002). Feldman also takes it as obvious that “it is clearly
desirable (say from an evolutionary point of view) for an organ-
ism to achieve veridical percepts of the world” (Feldman, 2013).
Knill and Richards concur that vision “... involves the evolution of an organism’s visual system to match the structure of the world ...” (Knill and Richards, 1996).
This assumption that perceptions are useful to the extent that
they are true is prima facie plausible, and it comports well with the
assumption of object permanence. For if our perceptions report
to us a three-dimensional world containing objects with specific
shapes and positions, and if these perceptual reports have been
shaped by evolution to be true, then we can be confident that
those objects really do, in the normal case, exist and have their
positions and shapes even when unperceived.
So we find it plausible that perceptions are useful only if true,
and we find it deeply counterintuitive to think otherwise. But
studies with evolutionary games and genetic algorithms flatly
contradict this deeply held assumption. Clearly our intuitions
need a little help here. How can we try to understand perceptions
that are useful but not true?
Fortunately, developments in computer technology have pro-
vided a convenient and helpful metaphor: the desktop of a win-
dows interface (Hoffman, 1998, 2009, 2011, 2012, 2013; Mausfeld,
2002; Koenderink, 2011a; Hoffman and Singh, 2012; Singh and
Hoffman, 2013). Suppose you are editing a text file and that the
icon for that file is a blue rectangle sitting in the lower left corner
of the desktop. If you click on that icon you can open the file and
revise its text. If you drag that icon to the trash, you can delete the
file. If you drag it to the icon for an external hard drive, you can
create a backup of the file. So the icon is quite useful.
But is it true? Well, the only visible properties of the icon are its
position, shape, and color. Do these properties of the icon resem-
ble the true properties of the file? Clearly not. The file is not blue
or rectangular, and it’s probably not in the lower left corner of the
computer. Indeed, files don’t have a color or shape, and needn’t
have a well-defined position (e.g., the bits of the file could be
spread widely over memory). So to even ask if the properties of
the icon are true is to make a category error, and to completely
misunderstand the purpose of the interface. One can reasonably
ask whether the icon is usefully related to the file, but not whether
it truly resembles the file.
Indeed, a critical function of the interface is to hide the truth.
Most computer users don’t want to see the complexity of the inte-
grated circuits, voltages, and magnetic fields that are busy behind
the scenes when they edit a file. If they had to deal with that
complexity, they might never finish their work on the file. So
the interface is designed to allow the user to interact effectively
with the computer while remaining largely ignorant of its true
architecture.
Ignorant, also, of its true causal structure. When a user drags
a file icon to an icon of an external drive, it looks obvious that
the movement of the file icon to the drive icon causes the file to
be copied. But this is just a useful fiction. The movement of the
file icon causes nothing in the computer. It simply serves to guide
the user’s operation of a mouse, triggering a complex chain of
causal events inside the computer, completely hidden from the
user. Forcing the user to see the true causal chain would be an
impediment, not a help.
Turning now to apply the interface metaphor to human per-
ception, the idea is that natural selection has not shaped our per-
ceptions to be insights into the true structure and causal nature
of objective reality, but has instead shaped our perceptions to be
a species-specific user interface, fashioned to guide the behav-
iors that we need to survive and reproduce. Space and time are
the desktop of our perceptual interface, and three-dimensional
objects are icons on that desktop.
Our interface gives the impression that it reveals true cause and
effect relations. When one billiard ball hits a second, it certainly
looks as though the first causes the second to careen away. But this
appearance of cause and effect is simply a useful fiction, just as it
is for the icons on the computer desktop.
There is an obvious rejoinder: “If that cobra is just an icon of
your interface with no causal powers, why don’t you grab it by the
tail?” The answer is straightforward: “I don’t grab the cobra for
the same reason I don’t carelessly drag my file icon to the trash—I
could lose a lot of work. I don’t take my icons literally: The file,
unlike its icon, is not literally blue or rectangular. But I do take
my icons seriously.”
Similarly, evolution has shaped us with a species-specific inter-
face whose icons we must take seriously. If there is a cliff, don’t
step over. If there is a cobra, don’t grab its tail. Natural selection
has endowed us with perceptions that function to guide adaptive
behaviors, and we ignore them at our own peril.
But, given that we must take our perceptions seriously, it does
not follow that we must take them literally. Such an inference is
natural, in the sense that most of us, even the brightest, make it
automatically. When Samuel Johnson heard Berkeley’s theory that
“To be is to be perceived” he kicked a stone and said, “I refute it
thus!” (Boswell, 1986) Johnson observed that one must take the
stone seriously or risk injury. From this Johnson concluded that
one must take the stone literally. But this inference is fallacious.
One might object that there still is an important sense in which
our perceptual icon of, say, a cobra does resemble the true objec-
tive reality: The consequences for an observer of grabbing the tail
of the cobra are precisely the consequences that would obtain if
the objective reality were in fact a cobra. Perceptions and internal
information-bearing structures are useful for fitness-preserving
or enhancing behavior because there is some mutual information
between the predicted utility of a behavior (like escaping) and its
actual utility. If there’s no mutual information and no mechanism
for increasing mutual information, fitness is low and stays that
way. Here we use mutual information in the sense of standard
information theory (Cover and Thomas, 2006).
This point is well-taken. Our perceptual icons do give us gen-
uine information about fitness, and fitness can be considered an
aspect of objective reality. Indeed, in Gibson’s ecological theory of
perception, our perceptions primarily resonate to “affordances,”
those aspects of the objective world that have important con-
sequences for fitness (Gibson, 1979). While we disagree with
Gibson’s direct realism and denial of information processing in
perception, we agree with his emphasis on the tuning of percep-
tion to fitness.
So we must clarify the relationship between truth and fitness.
In evolutionary theory it is as follows. If W denotes the objective world then, for a fixed organism, state, and action, we can think of a fitness function to be a function f : W → [0, 1], which assigns to each state w of W a fitness value f(w). If, for instance, the organism is a hungry cheetah and the action is eating, then f might assign a high fitness value to a world state w in which fresh raw meat is available; but if the organism is a hungry cow then f might assign a low fitness value to the same state w.
If the true probabilities of states in the world are given by a probability measure m on W, then one can define a new probability measure mf on W, where for any event A of W, mf(A) is simply the integral of f over A with respect to m; mf must of course be normalized so that mf(W) = 1.
And here is the key point. A perceptual system that is tuned
to maximize the mutual information with m will not, in general, maximize mutual information with mf (Cover and Thomas,
2006). Being tuned to truth, i.e., maximizing mutual information
with m, is not the same as being tuned to fitness, i.e., maximiz-
ing mutual information with mf. Indeed, depending on the fitness
function f, a perceptual system tuned to truth might carry little or
no information about fitness, and vice versa. It is in this sense that
the interface theory of perception claims that our perceptions are
tuned to fitness rather than truth.
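To see this distinction concretely, here is a minimal numerical sketch in Python. The four-state world, the uniform measure m, the goldilocks fitness f, and the two equal-capacity channels are our own illustrative inventions, not taken from the simulation studies cited above. The sketch shows a channel tuned to the structure of W carrying zero information about fitness, while a channel of the same capacity tuned to fitness carries all of it.

```python
import numpy as np

def mutual_info(joint):
    """Mutual information in bits, given a joint distribution matrix."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px * py)[mask])).sum())

m = np.full(4, 0.25)                        # true measure m on four world states
f = np.array([0.1, 1.0, 1.0, 0.1])          # non-monotonic "goldilocks" fitness

# Two perceptual channels P(x|w), each with only two percepts:
truth_tuned   = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)  # splits W by structure
fitness_tuned = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], float)  # splits W by fitness

for name, chan in (("truth-tuned", truth_tuned), ("fitness-tuned", fitness_tuned)):
    joint_wx = m[:, None] * chan            # joint p(w, x)
    # Collapse world states of equal fitness to get the joint p(f(W), x):
    joint_fx = np.array([joint_wx[f == v].sum(axis=0) for v in np.unique(f)])
    print(name, "I(W;X) = %.2f," % mutual_info(joint_wx),
          "I(f(W);X) = %.2f" % mutual_info(joint_fx))
# truth-tuned:   I(W;X) = 1.00, I(f(W);X) = 0.00  -- no information about fitness
# fitness-tuned: I(W;X) = 1.00, I(f(W);X) = 1.00
```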
There is another rejoinder: “The interface metaphor is nothing new. Physicists have told us for more than a century that solid objects are really mostly empty space. So an apparently solid stone isn’t the true reality, but its atoms and subatomic particles are.” Physicists have indeed said this since Rutherford published
his theory of the atomic nucleus in 1911 (Rutherford, 1911). But
the interface metaphor says something more radical. It says that
space and time themselves are just a desktop, and that anything
in space and time, including atoms and subatomic particles, are
themselves simply icons. It’s not just the moon that isn’t there
when one doesn’t look, it’s the atoms, leptons and quarks them-
selves that aren’t there. Object permanence fails for microscopic
objects just as it does for macroscopic.
This claim is, to contemporary sensibilities, radical. But there
is a perspective on the intellectual evolution of humanity over the
last few centuries for which the interface theory seems a natural
next step. According to this perspective, humanity has gradually
been letting go of the false belief that the way H. sapiens sees the
world is an insight into objective reality.
Many ancient cultures, including the pre-Socratic Greeks,
believed the world was flat, for the obvious reason that it looks
that way. Aristotle became persuaded, on empirical grounds, that
the earth is spherical, and this view gradually spread to other cul-
tures. Reality, we learned, departed in important respects from
some of our perceptions.
But then a geocentric model of the universe, in which the earth
is at the center and everything revolves around it, still held sway.
Why? Because that’s the way things look to our unaided percep-
tions. The earth looks like it’s not moving, and the sun, moon,
planets, and stars look like they circle a stationary earth. Not until
the work of Copernicus and Kepler did we recognize that once
again reality differs, in important respects, from our perceptions.
This was difficult to swallow. Galileo was forced to recant in the
Vatican basement, and Giordano Bruno was burned at the stake.
But we finally, and painfully, accepted the mismatch between our
perceptions and certain aspects of reality.
The interface theory entails that these first two steps were mere
warm up. The next step in the intellectual history of H. sapiens is
a big one. We must recognize that all of our perceptions of space,
time and objects no more reflect reality than does our perception
of a flat earth. It’s not just this or that aspect of our perceptions
that must be corrected, it is the entire framework of a space-time
containing objects, the fundamental organization of our percep-
tual systems, that must be recognized as a mere species-specific
mode of perception rather than an insight into objective reality.
By this time it should be clear that, if the arguments given here
are sound, then the current Bayesian models of object perception
need more than tinkering around the edges, they need fundamen-
tal transformation. And this transformation will necessarily have
ramifications for scientific questions well-beyond the confines of
computational models of object perception.
One example is the mind-body problem. A theory in which
objects and space-time do not exist unperceived and do not have
causal powers, cannot propose that neurons—which by hypoth-
esis do not exist unperceived and do not have causal powers—
cause any of our behaviors or conscious experiences. This is so
contrary to contemporary thought in this field that it is likely to
be taken as a reductio of the view rather than as an alternative
direction of inquiry for a field that has yet to construct a plausible
theory.
DEFINITION OF CONSCIOUS AGENTS
If our reasoning has been sound, then space-time and three-
dimensional objects have no causal powers and do not exist
unperceived. Therefore, we need a fundamentally new foundation
from which to construct a theory of objects. Here we explore the
possibility that consciousness is that new foundation, and seek a
mathematically precise theory. The idea is that a theory of objects
requires, first, a theory of subjects.
This is, of course, a non-trivial endeavor. Frank Wilczek, when
discussing the interpretation of quantum theory, said, “The rel-
evant literature is famously contentious and obscure. I believe it
will remain so until someone constructs, within the formalism of
quantum mechanics, an “observer,” that is, a model entity whose
states correspond to a recognizable caricature of conscious aware-
ness ...That is a formidable project, extending well-beyond what
is conventionally considered physics” (Wilczek, 2006).
The approach we take toward constructing a theory of con-
sciousness is similar to the approach Alan Turing took toward
constructing a theory of computation. Turing proposed a simple
but rigorous formalism, now called the Turing machine (Turing,
1937; Herken, 1988). It consists of seven components: (1) a finite
set of states, (2) a finite set of symbols, (3) a special blank sym-
bol, (4) a finite set of input symbols, (5) a start state, (6) a set of
halt states, and (7) a finite set of simple transition rules (Hopcroft
et al., 2006).
Turing and others then conjectured that a function is algorith-
mically computable if and only if it is computable by a Turing
machine. This “Church-Turing Thesis” can’t be proven, but it
could in principle be falsified by a counterexample, e.g., by some
example of a procedure that everyone agreed was computable but
for which no Turing machine existed. No counterexample has yet
been found, and the Church-Turing thesis is considered secure,
even definitional.
Similarly, to construct a theory of consciousness we propose a
simple but rigorous formalism called a conscious agent, consisting
of six components. We then state the conscious agent thesis, which
claims that every property of consciousness can be represented
by some property of a conscious agent or system of interacting
conscious agents. The hope is to start with a small and simple
set of definitions and assumptions, and then to have a complete
theory of consciousness arise as a series of theorems and proofs
(or simulations, when complexity precludes proof). We want a
theory of consciousness qua consciousness, i.e., of consciousness
on its own terms, not as something derivative or emergent from a
prior physical world.
No doubt this approach will strike many as prima facie absurd.
It is a commonplace in cognitive neuroscience, for instance, that
most of our mental processes are unconscious processes (Bargh
and Morsella, 2008). The standard account holds that well more
than 90% of mental processes proceed without conscious aware-
ness. Therefore, the proposal that consciousness is fundamental
is, to contemporary thought, an amusing anachronism not worth
serious consideration.
This critique is apt. It’s clear from many experiments that each
of us is indeed unaware of most of the mental processes underly-
ing our actions and conscious perceptions. But this is no surprise,
given the interface theory of perception. Our perceptual inter-
faces have been shaped by natural selection to guide, quickly and
cheaply, behaviors that are adaptive in our niche. They have not
been shaped to provide exhaustive insights into truth. In con-
sequence, our perceptions have endogenous limits to the range
and complexity of their representations. It was not adaptive to be
aware of most of our mental processing, just as it was not adaptive
to be aware of how our kidneys filter blood.
We must be careful not to assume that limitations of our
species-specific perceptions are insights into the true nature of
reality. My friend’s mind is not directly conscious to me, but that
does not entail that my friend is unconscious. Similarly, most of
my mental processes are not directly conscious to me, but that
does not entail that they are unconscious. Our perceptual sys-
tems have finite capacity, and will therefore inevitably simplify
and omit. We are well-advised not to mistake our omissions and
simplifications for insights into reality.
There are of course many other critiques of an approach
that takes consciousness to be fundamental: How can such an
approach explain matter, the fundamental forces, the Big Bang,
the genesis and structure of space-time, the laws of physics,
evolution by natural selection, and the many neural correlates
of consciousness? These are non-trivial challenges that must be
faced by the theory of conscious agents. But for the moment we
will postpone them and develop the theory of conscious agents
itself.
Conscious agent is a technical term, with a precise mathemat-
ical definition that will be presented shortly. To understand the
technical term, it can be helpful to have some intuitions that moti-
vate the definition. The intuitions are just intuitions, and if they
don’t help they can be dropped. What does the heavy lifting is the
definition itself.
A key intuition is that consciousness involves three processes:
perception,decision,andaction.
In the process of perception, a conscious agent interacts with
the world and, in consequence, has conscious experiences.
In the process of decision, a conscious agent chooses what
actions to take based on the conscious experiences it has.
In the process of action, the conscious agent interacts with the
world in light of the decision it has taken, and affects the state of
the world.
Another intuition is that we want to avoid unnecessarily
restrictive assumptions in constructing a theory of consciousness.
Our conscious visual experience of nearby space, for instance,
is approximately Euclidean. But it would be an unnecessary
restriction to require that all of our perceptual experiences be
represented by Euclidean spaces.
However it does seem necessary to discuss the probability of
having a conscious experience, of making a particular decision,
and of making a particular change in the world through action.
Thus, it seems necessary to assume that we can represent the
world, our conscious experiences, and our possible actions with
probability spaces.
We also want to avoid unnecessarily restrictive assumptions
about the processes of perception, decision, and action. We might
find, for instance, that a particular decision process maximizes
expected utility, or minimizes expected risk, or builds an explicit
model of the self. But it would be an unnecessary restriction to
require this of all decisions.
However, when considering the processes of perception, deci-
sion and action, it does seem necessary to discuss conditional
probability. It seems necessary, for instance, to discuss the con-
ditional probability of deciding to take a specific action given a
specific conscious experience, the conditional probability of a par-
ticular change in the world given that a specific action is taken,
and the conditional probability of a specific conscious experience
given a specific state of the world.
A general way to model such conditional probabilities is by
the mathematical formalism of Markovian kernels (Revuz, 1984).
One can think of a Markovian kernel as simply an indexed list
of probability measures. In the case of perception, for instance,
a Markovian kernel might specify that if the state of the world is
w1, then here is a list of the probabilities for the various conscious experiences that might result, but if the state of the world is w2, then here is a different list of the probabilities for the various conscious experiences that might result, and so on for all the possible states of the world. A Markovian kernel on a finite set of states can be written as a matrix in which the entries in each row sum to 1.
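As an illustrative sketch (the two world states, three experiences, and all probabilities below are hypothetical), such a kernel in the discrete case is just a row-stochastic matrix:

```python
import numpy as np

# A Markovian kernel on finite sets: a matrix whose rows each sum to 1.
# Row i lists the probabilities of the possible conscious experiences
# given world state w_i.
P = np.array([
    [0.7, 0.2, 0.1],   # if the world is in state w1
    [0.1, 0.3, 0.6],   # if the world is in state w2
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability measure

rng = np.random.default_rng(0)
x = rng.choice(3, p=P[0])   # sample an experience given world state w1
```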
A Markovian kernel can also be thought of as an informa-
tion channel. Cover and Thomas, for instance, define “a discrete
channel to be a system consisting of an input alphabet X and output alphabet Y and a probability transition matrix p(y|x) that expresses the probability of observing the output symbol y given that we send the symbol x” (Cover and Thomas, 2006). Thus, a
discrete channel is simply a Markovian kernel.
So, each time a conscious agent interacts with the world and,
in consequence, has a conscious experience, we can think of this
interaction as a message being passed from the world to the con-
scious agent over a channel. Similarly, each time the conscious
agent has a conscious experience and, in consequence, decides on
an action to take, we can think of this decision as a message being
passed over a channel within the conscious agent itself. And when
the conscious agent then takes the action and, in consequence,
alters the state of the world, we can think of this as a message
being passed from the conscious agent to the world over a chan-
nel. In the discrete case, we can keep track of the number of times
each channel is used. That is, we can count the number of mes-
sages that are passed over each channel. Assuming that all three
channels (perception, decision, action) all work in lock step, we
can use one counter, N,tokeeptrackofthenumberofmessages
that are passed.
These are some of the intuitions that underlie the definition
of conscious agent that we will present. These intuitions can be
represented pictorially in a diagram, as shown in Figure 1. The channel P transmits messages from the world W, leading to conscious experiences X. The channel D transmits messages from X, leading to actions G. The channel A transmits messages from G that are received as new states of W. The counter N is an integer that keeps track of the number of messages that are passed on each channel.
In what follows we will be using the notion of a measurable space. Recall that a measurable space, (X, 𝒳), is a set X together with a collection 𝒳 of subsets of X, called events, that satisfies three properties: (1) X is in 𝒳; (2) 𝒳 is closed under complement (i.e., if a set A is in 𝒳 then the complement of A is also in 𝒳); and (3) 𝒳 is closed under countable union. The collection of events 𝒳 is a σ-algebra (Athreya and Lahiri, 2006). A probability measure assigns a probability to each event in 𝒳.
With these intuitions, we now present the formal definition of a conscious agent where, for the moment, we simply assume that the world is a measurable space (W, 𝒲).

Definition 1. A conscious agent, C, is a six-tuple

    C = ((X, 𝒳), (G, 𝒢), P, D, A, N),    (1)

where:

(1) (X, 𝒳) and (G, 𝒢) are measurable spaces;
(2) P : W × 𝒳 → [0, 1], D : X × 𝒢 → [0, 1], and A : G × 𝒲 → [0, 1] are Markovian kernels; and
(3) N is an integer.

For convenience we will often write a conscious agent C as

    C = (X, G, P, D, A, N),    (2)

omitting the σ-algebras.
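For readers who find code helpful, here is a minimal computational sketch of Definition 1 in the discrete case. The ConsciousAgent class and its step method are our own illustrative construction, not part of the formal theory; step runs the three kernels sequentially for a single agent, whereas the dynamics defined below runs all channels in lock step.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ConsciousAgent:
    """C = (X, G, P, D, A, N), with the three Markovian kernels
    represented as row-stochastic matrices."""
    X: int               # number of possible conscious experiences
    G: int               # number of possible actions
    P: np.ndarray        # perception kernel, shape (|W|, X)
    D: np.ndarray        # decision kernel,   shape (X, G)
    A: np.ndarray        # action kernel,     shape (G, |W|)
    N: int = 0           # counter of messages passed over the channels

    def step(self, w, rng):
        """One perceive-decide-act cycle starting from world state w."""
        x = rng.choice(self.X, p=self.P[w])               # perceive
        g = rng.choice(self.G, p=self.D[x])               # decide
        w_new = rng.choice(self.A.shape[1], p=self.A[g])  # act
        self.N += 1                                       # one use of each channel
        return w_new
```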
FIGURE 1 | A diagram of a conscious agent. A conscious agent has six components as illustrated here. The maps P, D, and A can be thought of as communication channels.
Given that P, D, and A are channels, each has a channel
capacity, viz., a highest rate of bits per channel use, at which
information can be sent across the channel with arbitrarily low
chance of error (Cover and Thomas, 2006).
The formal structure of a conscious agent, like that of a Turing
machine, is simple. Nevertheless, we will propose, in the next sec-
tion, a “conscious-agent thesis” which, like the Church-Turing
thesis, claims wide application for the formalism.
CONSCIOUS REALISM
One glaring feature of the definition of a conscious agent is that
it involves the world, W. This is not an arbitrary choice; W is
required to define the perceptual map P and action map A of the
conscious agent.
This raises the question: What is the world? If we take it to be
the space-time world of physics, then the formalism of conscious
agents is dualistic, with some components (e.g., X and G) refer-
ring to consciousness and another, viz., W, referring to a physical
world.
We want a non-dualistic theory. Indeed, the monism we
want takes consciousness to be fundamental. The formal-
ism of conscious agents provides a precise way to state this
monism.
Hypothesis 1. Conscious realism: The world W consists entirely
of conscious agents.
Conscious realism is a precise hypothesis that, of course, might
be precisely wrong. We can explore its theoretical implications
in the normal scientific manner to see if they comport well with
FIGURE 2 | Two conscious agents, C1 and C2. Each is part of the world W for the other conscious agent. The lower part of the diagram represents C1 and the upper part represents C2. This creates an undirected combination of C1 and C2, a concept we define in section The Combination Problem.
existing data and theories, and make predictions that are novel,
interesting and testable.
TWO CONSCIOUS AGENTS
Conscious realism can be expressed mathematically in a simple
form. Consider the elementary case, in which the world W of one conscious agent,

    C1 = (X1, G1, P1, D1, A1, N1),    (3)

contains just C1 and one other agent,

    C2 = (X2, G2, P2, D2, A2, N2),    (4)

and vice versa. This is illustrated in Figure 2.
Observe that although W is the world it cannot properly be called, in this example, the external world of C1 or of C2 because C1 and C2 are each part of W. This construction of W requires the compatibility conditions

    P1 = A2,    (5)
    P2 = A1,    (6)
    N1 = N2.    (7)
These conditions mean that the perceptions of one conscious
agent are identical to the actions of the other, and that their coun-
ters are synchronized. To understand this, recall that we can think
of P1, P2, A1, and A2 as information channels. So interpreted, con-
ditions (5) and (6) state that the action channel of one agent is
the same information channel as the perception channel of the
other agent. Condition (7) states that the channels of both agents
operate in synchrony.
FIGURE 3 | Two adjacent conscious agents, C1 and C2. Each agent
receives messages from the other (indicated by the concave receivers) and
sends messages to the other (indicated by the semicircular transmitters).
Arrows show the direction of information flow.
If two conscious agents C1 and C2 satisfy the commuting diagram of Figure 2, then we say that they are joined or adjacent: the experiences and actions of C1 affect the probabilities of experiences and actions for C2 and vice versa. Figure 3 illustrates the ideas so far.
We can simplify the diagrams further and simply write C1 ↔ C2 to represent two adjacent conscious agents.
THREE CONSCIOUS AGENTS
Any number of conscious agents can be joined. Consider the case
of three conscious agents,
    Ci = (Xi, Gi, Pi, Di, Ai, Ni),  i = 1, 2, 3.    (8)

This is illustrated in Figure 4, and compactly in Figure 5.
Because C1 interacts with C2 and C3, its perceptions are affected by both C2 and C3. Thus, its perception kernel, P1, must reflect the inputs of C2 and C3. We write it as follows:

    P1 = P12 ⊗ P13 : (G2 × G3) × 𝒳1 → [0, 1],    (9)

where

    𝒳1 = σ(X12 × X13),    (10)

(X12, 𝒳12) is the measurable space of perceptions that C1 can receive from C2, (X13, 𝒳13) is the measurable space of perceptions that C1 can receive from C3, and σ(X12 × X13) denotes the σ-algebra generated by the Cartesian product of X12 and X13. The tensor product P1 of (9) is given by the formula

    P1((g2, g3), (x12, x13)) = P12(g2, x12) P13(g3, x13),    (11)

where g2 ∈ G2, g3 ∈ G3, x12 ∈ X12, and x13 ∈ X13. Note that (11) allows that the perceptions that C1 gets from C2 could be entirely different from those it gets from C3, and expresses the probabilistic independence of these perceptual inputs. In general, X12 need not be identical to X13, since the kinds of perceptions that C1 can
FIGURE 4 | Three adjacent conscious agents. The third agent is
replicated at the top and bottom of the diagram for visual simplicity.
receive from C2 need not be the same as the kinds of perceptions that C1 can receive from C3.
Because C1 interacts with C2 and C3, its actions affect both. However, the way C1 acts on C2 might differ from how it acts on C3, and the definition of its action kernel, A1, must allow for this difference of action. Therefore, we define the action kernel, A1, to be the tensor product

    A1 = A12 ⊗ A13 : G1 × σ(X2 × X3) → [0, 1],    (12)

where

    G1 = G12 × G13,    (13)

(G12, 𝒢12) is the measurable space of actions that C1 can take on C2, and (G13, 𝒢13) is the measurable space of actions that C1 can take on C3.
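In the discrete case the tensor products of Equations (9)–(13) reduce to Kronecker products of the component kernels. A small sketch, with made-up 2 × 2 kernels:

```python
import numpy as np

# Illustrative kernels: what C1 perceives from C2 and from C3.
P12 = np.array([[0.9, 0.1], [0.2, 0.8]])   # rows indexed by g2, columns by x12
P13 = np.array([[0.6, 0.4], [0.3, 0.7]])   # rows indexed by g3, columns by x13

# Equation (11): P1((g2,g3),(x12,x13)) = P12(g2,x12) * P13(g3,x13),
# which in matrix form is exactly the Kronecker product.
P1 = np.kron(P12, P13)
assert np.allclose(P1.sum(axis=1), 1.0)    # the tensor product is again Markovian
```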
FIGURE 5 | Three adjacent conscious agents. This is a compact
representation of the diagram in Figure 4.
FIGURE 6 | Three conscious agents whose graph is complete.
In this situation, the three conscious agents have the property
that every pair is adjacent; we say that the graph of the three agents
is complete. This is illustrated in Figure 6.
So far we have considered joins that are undirected, in the sense that if C1 sends a message to C2 then C2 sends a message to C1. However, it is also possible for conscious agents to have directed joins. This is illustrated in Figure 7. In this case, C1 sends a message to C2 and receives a message from C3, but receives no
FIGURE 7 | Three conscious agents with directed joins. Here we assume A1 = P2, A2 = P3, and A3 = P1.
FIGURE 8 | Simplified graph of three conscious agents with directed
joins.
message from C2 and sends no message to C3. Similar remarks hold, mutatis mutandis, for C2 and C3.
Figure 7 can be simplified as shown in Figure 8.
Directed joins can model the standard situation in visual
perception, in which there are multiple levels of visual represen-
tations, one level building on others below it. For instance, at one
level there could be the construction of 2D motions based on a
solution to the correspondence problem; at the next level there
could be a computation of 3D structure from motion, based on
the 2D motions computed at the earlier level (Marr, 1982). So
an agent C1 might solve the correspondence problem and pass its
solution to C2, which solves the structure-from-motion problem,
and then passes its solution to C3, which does object recognition.
We can join any number of conscious agents into any multi-
graph, where nodes denote agents and edges denote directed or
undirected joins between agents (Chartrand and Ping, 2012). The
nodes can have any finite degree, i.e., any finite number of edges.
As a special case, conscious agents can be joined to form deter-
ministic or non-deterministic cellular automata (Ceccherini-
Silberstein and Coornaert, 2010) and universal Turing machines
(Cook, 2004).
DYNAMICS OF TWO CONSCIOUS AGENTS
Two conscious agents
    C1 = (X1, G1, P1, D1, A1, N1),    (14)

and

    C2 = (X2, G2, P2, D2, A2, N2),    (15)
can be joined, as illustrated in Figure 2, to form a dynamical
system. Here we discuss basic properties of this dynamics.
The state space, E, of the dynamics is E = X1 × G1 × X2 × G2, with product σ-algebra 𝓔. The idea is that for the current step, t ∈ ℕ, of the dynamics, the state can be described by the vector (x1(t), g1(t), x2(t), g2(t)), and based on this state four actions happen simultaneously: (1) agent C1 experiences the perception x1(t) ∈ X1 and decides, according to D1, on a specific action g1(t) ∈ G1 to take at step t + 1; (2) agent C1, using A1, takes the action g1(t) ∈ G1; (3) agent C2 experiences the perception x2(t) ∈ X2 and decides, according to D2, on a specific action g2(t) ∈ G2 to take at step t + 1; (4) agent C2, using A2, takes the action g2(t) ∈ G2.
Thus, the state evolves by a kernel

    L : E × 𝓔 → [0, 1],    (16)

which is given, for state e = (x1(t), g1(t), x2(t), g2(t)) ∈ E at time t and event B ∈ 𝓔, comprised of a measurable set of states of the form (x1(t+1), g1(t+1), x2(t+1), g2(t+1)), by

    L(e, B) = ∫_B A2(g2(t), dx1(t+1)) D1(x1(t), dg1(t+1)) A1(g1(t), dx2(t+1)) D2(x2(t), dg2(t+1)).    (17)

This is not kernel composition; it is simply multiplication of the four kernel values. The idea is that at each step of the dynamics each of the four kernels acts simultaneously and independently of the others to transition the state (x1(t), g1(t), x2(t), g2(t)) to the next state (dx1(t+1), dg1(t+1), dx2(t+1), dg2(t+1)).
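In the discrete case the integral in Equation (17) reduces to a product of four matrix entries, so L can be assembled explicitly. A sketch, assuming the four kernels are given as row-stochastic NumPy matrices:

```python
import numpy as np
from itertools import product

def joint_kernel(A1, D1, A2, D2):
    """Assemble the transition matrix L of Equation (17) over states
    e = (x1, g1, x2, g2): the four kernels fire simultaneously and
    independently, so each entry of L is a product of four kernel values."""
    nx1, ng1 = D1.shape
    nx2, ng2 = D2.shape
    states = list(product(range(nx1), range(ng1), range(nx2), range(ng2)))
    L = np.zeros((len(states), len(states)))
    for i, (x1, g1, x2, g2) in enumerate(states):
        for j, (x1n, g1n, x2n, g2n) in enumerate(states):
            # A2: G2 -> X1 (since P1 = A2), D1: X1 -> G1,
            # A1: G1 -> X2 (since P2 = A1), D2: X2 -> G2.
            L[i, j] = A2[g2, x1n] * D1[x1, g1n] * A1[g1, x2n] * D2[x2, g2n]
    return L, states
```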
FIRST EXAMPLE OF ASYMPTOTIC BEHAVIOR
For concreteness, consider the simplest possible case where (1) X1, G1, X2, and G2 each have only two states which, using Dirac notation, we denote |0⟩ and |1⟩, and (2) each of the kernels A2, D1, A1, and D2 is a 2 × 2 identity matrix.
There are a total of 2⁴ = 16 possible states for the dynamics of the two agents, which we can write as |0000⟩, |0001⟩, |0010⟩, ..., |1111⟩, where the leftmost digit is the state of X1, the next digit the state of G1, the next of X2, and the rightmost of G2.
The asymptotic (i.e., long-term) dynamics of these two con-
scious agents can be characterized by its absorbing sets and their
periods. Recall that an absorbing set for such a dynamics is
a smallest set of states that acts like a roach motel: once the
dynamics enters the absorbing set it never leaves, and it forever
cycles periodically through the states within that absorbing set.
It is straightforward to verify that for the simple dynamics of
conscious agents just described, the asymptotic behavior is as
follows:
(1) {|0000⟩} is absorbing with period 1;
(2) {|1111⟩} is absorbing with period 1;
(3) {|0101⟩, |1010⟩} is absorbing with period 2;
(4) {|0001⟩, |1000⟩, |0100⟩, |0010⟩} is absorbing with period 4, and cycles in that order;
(5) {|0011⟩, |1001⟩, |1100⟩, |0110⟩} is absorbing with period 4, and cycles in that order;
(6) {|0111⟩, |1011⟩, |1101⟩, |1110⟩} is absorbing with period 4, and cycles in that order.
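These absorbing sets can be verified numerically. The sketch below reuses joint_kernel from above; with identity kernels every row of L has exactly one nonzero entry, so each orbit can be followed deterministically:

```python
import numpy as np

I2 = np.eye(2)
L, states = joint_kernel(I2, I2, I2, I2)
labels = ["".join(map(str, s)) for s in states]

def cycle_from(L, start):
    """Follow the deterministic dynamics until a state repeats;
    return the absorbing cycle that is reached."""
    seen, s = [], start
    while s not in seen:
        seen.append(s)
        s = int(np.argmax(L[s]))   # each row here has exactly one nonzero entry
    return seen[seen.index(s):]

for s in (0, 5, 1):                # |0000>, |0101>, |0001>
    print(["|%s>" % labels[c] for c in cycle_from(L, s)])
# -> ['|0000>'], ['|0101>', '|1010>'], ['|0001>', '|1000>', '|0100>', '|0010>']
```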
SECOND EXAMPLE OF ASYMPTOTIC BEHAVIOR
If we alter this dynamics by simply changing the kernel D1 from an identity matrix to the matrix D1 = ((0, 1), (1, 0)), then the asymptotic behavior changes to the following:

(1) {|0000⟩, |0100⟩, |0110⟩, |0111⟩, |1111⟩, |1011⟩, |1001⟩, |1000⟩} is absorbing with period 8, and cycles in that order;
(2) {|0001⟩, |1100⟩, |0010⟩, |0101⟩, |1110⟩, |0011⟩, |1101⟩, |1010⟩} is absorbing with period 8, and cycles in that order.
If instead of changing D1 we changed D2 (or A1 or A2) to ((0, 1), (1, 0)), we would get the same asymptotic behavior. Thus,
in general, an asymptotic behavior corresponds to an equivalence
class of interacting conscious agents.
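The same machinery confirms this; reusing joint_kernel, cycle_from, labels, and I2 from the sketches above:

```python
# Changing D1 to the flip matrix merges the six absorbing sets into the
# two period-8 cycles listed above; flipping D2, A1, or A2 instead yields
# the same asymptotic behavior.
flip = np.array([[0., 1.], [1., 0.]])
L2, _ = joint_kernel(I2, flip, I2, I2)
print(["|%s>" % labels[c] for c in cycle_from(L2, 0)])
# -> |0000>, |0100>, |0110>, |0111>, |1111>, |1011>, |1001>, |1000>
```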
The range of possible dynamics of pairs of conscious agents
is huge, and grows as one increases the richness of the state
space E and, therefore, the set of possible kernels. The possibil-
ities increase as one considers dynamical systems of three or more
conscious agents, with all the possible directed and undirected
joins among them, forming countless connected multi-graphs or
amenable groups.
With this brief introduction to the dynamics of conscious
agents we are now in a position to state another key hypothesis.
Hypothesis 2. Conscious-agent thesis. Every property of con-
sciousness can be represented by some property of a dynamical
system of conscious agents.
THE COMBINATION PROBLEM
Conscious realism and the conscious-agent thesis are strong
claims, and face a tough challenge: Any theory that claims con-
sciousness is fundamental must solve the combination problem
(Seager, 1995; Goff, 2009; Blamauer, 2011; Coleman, 2014).
William Seager describes this as “the problem of explaining how
the myriad elements of ‘atomic consciousness’ can be combined
into a new, complex and rich consciousness such as that we
possess” (Seager, 1995).
William James saw the problem back in 1890: “Where the ele-
mental units are supposed to be feelings, the case is in no wise
altered. Take a hundred of them, shuffle them and pack them as
close together as you can (whatever that may mean); still each
remains the same feeling it always was, shut in its own skin, win-
dowless, ignorant of what the other feelings are and mean. There
would be a hundred-and-first feeling there, if, when a group or
series of such feelings were set up, a consciousness belonging to
the group as such should emerge. And this 101st feeling would
be a totally new fact; the 100 original feelings might, by a curious
physical law, be a signal for its creation, when they came together;
but they would have no substantial identity with it, nor it with
them, and one could never deduce the one from the others, or
(in any intelligible sense) say that they evolved it. ... The pri-
vate minds do not agglomerate into a higher compound mind”
(James, 1890/2007).
There are really two combination problems. The first is
the combination of phenomenal experiences, i.e., of qualia. For
instance, one’s taste experiences of salt, garlic, onion, basil and
tomato are somehow combined into the novel taste experience
of a delicious pasta sauce. What is the relationship between one’s
experiences of the ingredients and one’s experience of the sauce?
The second problem is the combination of subjects of expe-
riences. In the sauce example, a single subject experiences the
ingredients and the sauce, so the problem is to combine experi-
ences within a single subject. But how can we combine subjects
themselves to create a new unified subject? Each subject has its
point of view. How can different points of view be combined to
give a new, single, point of view?
No rigorous theory has been given for combining phenome-
nal experiences, but there is hope. Sam Coleman, for instance,
is optimistic but notes that “there will have to be some sort of
qualitative blending or pooling among the qualities carried by
each ultimate: if each ultimate’s quality showed up as such in the
macro-experience, it would lack the notable homogeneity of (e.g.)
color experience, and plausibly some mixing of basic qualities is
required to obtain the qualities of macro-experience” (Coleman,
2014).
Likewise, no rigorous theory has been given for combining
subjects. But here there is little hope. Thomas Nagel, for instance,
says “Presumably the components out of which a point of view
is constructed would not themselves have to have points of view”
(Nagel, 1979). Coleman goes further, saying, “it is impossible to
explain the generation of a macro-subject (like one of us) in terms
of the assembly of micro-subjects, for, as I show, subjects cannot
combine” (Coleman, 2014).
So at present there is the hopeful, but unsolved, problem of
combining experiences and the hopeless problem of combining
subjects.
The theory of conscious agents provides two ways to combine
conscious agents: undirected combinations and directed combi-
nations. We prove this, and then consider the implications for
solving the problems of combining experiences and combining
subjects.
Theorem 1. (Undirected Join Theorem.) An undirected join of two conscious agents creates a new conscious agent.
Proof. (By construction.) Let two conscious agents

C_1 = ((X_1, 𝒳_1), (G_1, 𝒢_1), P_1, D_1, A_1, N_1),    (18)

and

C_2 = ((X_2, 𝒳_2), (G_2, 𝒢_2), P_2, D_2, A_2, N_2),    (19)

have an undirected join. Let

C = ((X, 𝒳), (G, 𝒢), P, D, A, N),    (20)

where

X = X_1 × X_2,    (21)
G = G_1 × G_2,    (22)
P = P_1 ⊗ P_2 : G^T × 𝒳 → [0, 1],    (23)
D = D_1 ⊗ D_2 : X × 𝒢 → [0, 1],    (24)
A = A_1 ⊗ A_2 : G × 𝒳^T → [0, 1],    (25)
N = N_1 = N_2,    (26)

where the superscript T indicates transpose, e.g., X^T = X_2 × X_1; where 𝒳 is the σ-algebra generated by the Cartesian product of 𝒳_1 and 𝒳_2; where 𝒢 is the σ-algebra generated by 𝒢_1 and 𝒢_2; and where the Markovian kernels P, D, and A are given explicitly, in the discrete case, by

P((g_2, g_1), (x_1, x_2)) = P_1 ⊗ P_2((g_2, g_1), (x_1, x_2)) = P_1(g_2, x_1) P_2(g_1, x_2),    (27)
D((x_1, x_2), (g_1, g_2)) = D_1 ⊗ D_2((x_1, x_2), (g_1, g_2)) = D_1(x_1, g_1) D_2(x_2, g_2),    (28)
A((g_1, g_2), (x_2, x_1)) = A_1 ⊗ A_2((g_1, g_2), (x_2, x_1)) = A_1(g_1, x_2) A_2(g_2, x_1),    (29)

where g_1 ∈ G_1, g_2 ∈ G_2, x_1 ∈ X_1, and x_2 ∈ X_2. Then C satisfies the definition of a conscious agent.
Thus, the undirected join of two conscious agents (illustrated
in Figure 2) creates a single new conscious agent that we call
their undirected combination. It is straightforward to extend the
construction in Theorem 1 to the case in which more than
two conscious agents have an undirected join. In this case the
joined agents create a single new agent that is their undirected
combination.
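To make the construction concrete, here is a minimal computational sketch (ours, not from the paper) of the product kernels (27)–(29) for finite state spaces. The names P1, P2, and kernel_product are our own; with states ordered lexicographically, NumPy's Kronecker product realizes exactly the componentwise product of kernels.

```python
# Sketch (ours): the product kernels of Eqs. (27)-(29) for finite state
# spaces. With lexicographic ordering of paired states, np.kron gives
# K((a, b), (c, d)) = K1(a, c) * K2(b, d); for Eq. (27), the pair (a, b)
# plays the role of (g2, g1) and (c, d) the role of (x1, x2).
import numpy as np

def kernel_product(K1, K2):
    """Componentwise product of two finite Markovian kernels."""
    return np.kron(K1, K2)

P1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # toy perception kernel, agent 1
P2 = np.array([[0.7, 0.3], [0.4, 0.6]])  # toy perception kernel, agent 2

P = kernel_product(P1, P2)               # 4 x 4 joint kernel
assert np.allclose(P.sum(axis=1), 1.0)   # rows remain probability measures,
                                         # so P is again Markovian
```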
Theorem 2. (Directed Join Theorem.) A directed join of two conscious agents creates a new conscious agent.
Proof. (By construction.) Let two conscious agents

C_1 = ((X_1, 𝒳_1), (G_1, 𝒢_1), P_1, D_1, A_1, N_1),    (30)

and

C_2 = ((X_2, 𝒳_2), (G_2, 𝒢_2), P_2, D_2, A_2, N_2),    (31)

have the directed join C_1 → C_2. Let

C = ((X, 𝒳), (G, 𝒢), P, D, A, N),    (32)

where

X = X_1,    (33)
G = G_2,    (34)
P = P_1,    (35)
D = D_1 A_1 D_2 : X_1 × 𝒢_2 → [0, 1],    (36)
A = A_2,    (37)
N = N_1 = N_2,    (38)

where D_1 A_1 D_2 denotes kernel composition. Then C satisfies the definition of a conscious agent.
Thus, the directed join of two conscious agents creates a single new conscious agent that we call their directed combination. It is straightforward to extend the construction in Theorem 2 to the case in which more than one conscious agent has a directed join to C_2. In this case, all such agents, together with C_2, create a new agent that is their directed combination.
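For finite state spaces, the kernel composition D = D_1 A_1 D_2 of Eq. (36) is simply matrix multiplication of row-stochastic matrices. The following sketch (our notation, not the authors') checks that the composite remains Markovian, which is what the proof requires of D.

```python
# Sketch (ours): kernel composition as matrix multiplication.
import numpy as np

D1 = np.array([[0.6, 0.4], [0.1, 0.9]])    # toy kernel X1 -> G1
A1 = np.array([[0.5, 0.5], [0.3, 0.7]])    # toy kernel G1 -> X2
D2 = np.array([[0.8, 0.2], [0.25, 0.75]])  # toy kernel X2 -> G2

D = D1 @ A1 @ D2                           # composite kernel X1 -> G2, Eq. (36)
assert np.allclose(D.sum(axis=1), 1.0)     # rows are probability measures
```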
Given Theorems 1 and 2, we make the following
Conjecture 3. (Combination Conjecture.) Given any pseudograph of conscious agents, with any mix of directed and undirected edges, any subset of conscious agents from the pseudograph, adjacent to each other or not, can be combined to create a new conscious agent.
How do these theorems address the problems of combining
experiences and subjects? We consider first the combination of
experiences.
Suppose C_1 has a space of possible perceptual experiences X_1, and C_2 has a space of possible perceptual experiences X_2. Then their undirected join creates a new conscious agent C that has a space of possible perceptual experiences X = X_1 × X_2. In this case, C has possible experiences that are not possible for C_1 or C_2. If, for instance, C_1 can see only achromatic brightness, and C_2 can see only variations in hue, then C can see hues of varying brightness. Although C's possible experiences X are the Cartesian product of X_1 and X_2, nevertheless C might exhibit perceptual dependence between X_1 and X_2, due to feedback inherent in an undirected join (Maddox and Ashby, 1996; Ashby, 2000).
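A toy illustration of this product structure, with hypothetical experience labels of our own choosing:

```python
# Sketch (ours): the joined agent's experience space X = X1 x X2 contains
# brightness-hue pairs available to neither constituent agent alone.
from itertools import product

X1 = ["dark", "bright"]            # achromatic brightness only
X2 = ["red", "green", "blue"]      # hue only
X = list(product(X1, X2))          # joint experiences, e.g. ("bright", "red")
print(len(X))                      # 6 experiences, all new to the joined agent
```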
For a directed join C_1 → C_2, the directed-combination agent C has a space of possible perceptual experiences X = X_1. This might suggest that no combination of experiences takes place. However, C has a decision kernel D that is given by the kernel product D_1 A_1 D_2. This product integrates (in the literal sense of integral calculus) over the entire space of perceptual experiences X_2, making these perceptual experiences an integral part of the decision process. This comports well with evidence that there is something it is like to make a decision (Nahmias et al., 2004; Bayne and Levy, 2006), and suggests the intriguing possibility that the phenomenology of decision making is intimately connected with the spaces of perceptual experiences that are integrated in the decision process. This is an interesting prediction of the formalism of conscious agents, and suggests that a solution of the combination problem for experience will necessarily involve the integration of experience with decision-making.
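Written as an explicit sum, the kernel product D_1 A_1 D_2 makes this integration over X_2 visible. The following sketch (our notation; the function name directed_decision is hypothetical) is equivalent, in the finite case, to the matrix product shown earlier.

```python
# Sketch (ours): Eq. (36) as an explicit sum,
#   D(x1, g2) = sum over g1, x2 of D1(x1, g1) A1(g1, x2) D2(x2, g2),
# showing how every experience x2 in X2 enters C's decision.
import numpy as np

def directed_decision(D1, A1, D2):
    nX1, nG1 = D1.shape
    nX2, nG2 = D2.shape
    D = np.zeros((nX1, nG2))
    for x1 in range(nX1):
        for g2 in range(nG2):
            for g1 in range(nG1):
                for x2 in range(nX2):   # the "integration" over X2
                    D[x1, g2] += D1[x1, g1] * A1[g1, x2] * D2[x2, g2]
    return D

D1 = np.array([[0.6, 0.4], [0.1, 0.9]])
A1 = np.array([[0.5, 0.5], [0.3, 0.7]])
D2 = np.array([[0.8, 0.2], [0.25, 0.75]])
assert np.allclose(directed_decision(D1, A1, D2), D1 @ A1 @ D2)
```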
We turn now to the combination of subjects. Coleman
describes subjects as follows: “The idea of being a subject goes
with being an experiential entity, something conscious of phe-
nomenal qualities. That a given subject has a particular phe-
nomenological point of view can be taken as saying that there
exists a discrete ‘sphere’ of conscious-experiential goings-on cor-
responding to this subject, with regard to which other subjects are
distinct in respect of the phenomenal qualities they experience,
and they have no direct (i.e., experiential) access to the qualitative
field enjoyed by the first subject. A subject, then, can be thought of
as a point of view annexed to a private qualitative field” (Coleman,
2014).
A conscious agent C_i is a subject in the sense described by Coleman. It has a distinct sphere, X_i, of "conscious-experiential goings-on" and has no direct experiential access to the sphere, X_j, of experiences of any other conscious agent C_j. Moreover, a conscious agent is a subject in the further sense of being an agent, i.e., making decisions and taking actions on its own. Thus, according to the theory being explored here, a subject, a point of view, is a six-tuple that satisfies the definition of a conscious agent.
The problem with combining subjects is, according to Goff,
that “It is never the case that the existence of a number (one or
more) of subjects of experience with certain phenomenal char-
acters a priori entails the existence of some other subject of
experience” (Goff, 2009).
Coleman goes further, saying that "The combination of subjects is a demonstrably incoherent notion, not just one lacking in a priori intelligibility ..." (Coleman, 2014). He explains why: "... a set of points of view have nothing to contribute as such to a single, unified successor point of view. Their essential property defines them against it: in so far as they are points of view they are experientially distinct and isolated—they have different streams of consciousness. The diversity of the subject-set, of course, derives from the essential oneness of any given member: since each subject is essentially a oneness, a set of subjects are essentially diverse, for they must be a set of onenesses. Essential unity from essential diversity ... is thus a case of emergence ..."
The theory of conscious agents proposes that a subject, a point
of view, is a six-tuple that satisfies the definition of conscious
agent. The directed and undirected join theorems give construc-
tive proofs of how conscious agents and, therefore, points of view,
can be combined to create a new conscious agent, and thus a
new point of view. The original agents, the original subjects, are
not destroyed in the creation of the new agent, the new sub-
ject. Instead the original subjects structurally contribute in an
understandable, indeed mathematically definable, fashion to the
structure and properties of the new agent. The original agents are,
indeed, influenced in the process, because they interact with each
other. But they retain their identities. And the new agent has new
properties not enjoyed by the constituent agents, but which are
intelligible from the structure and interactions of the constituent
agents. In the case of undirected combination, for instance, we
have seen that the new agent can have periodic asymptotic prop-
erties that are not possessed by the constituent agents but that are
intelligible—and thus not emergent in a brute sense—from the
structures and interactions of the constituent agents.
Thus, in short, the theory of conscious agents provides the first rigorous theoretical account of the combination of subjects. The formalism is rich with deductive implications to be explored. The discussion here is just a start. But one hint is the following. The undirected combination of two conscious agents is a single conscious agent whose world, W, is itself. This appears to be a model of introspection, in which introspection emerges, in an intelligible fashion, from the combination of conscious agents.
MICROPHYSICAL OBJECTS
We have sketched a theory of subjects. Now we use it to sketch a
theory of objects, beginning with the microscopic and proceeding
to the macroscopic.
The idea is that space-time and objects are among the sym-
bols that conscious agents employ to represent the properties and
interactions of conscious agents. Because each agent is finite, but
the realm of interacting agents is infinite, the representations of
each agent, in terms of space-time and objects, must omit and
simplify. Hence the perceptions of each agent must serve as an
interface to that infinite realm, not as an isomorphic map.
Interacting conscious agents form dynamical systems, with
asymptotic (i.e., long-term) behaviors. We propose that micro-
physical objects represent asymptotic properties of the dynamics
of conscious agents, and that space-time is simply a convenient
framework for this representation. Specifically, we observe that
the harmonic functions of the space-time chain that is associated
with the dynamics of a system of conscious agents are identical to
the wave function of a free particle; particles are vibrations not of
strings but of interacting conscious agents.
Consider, for concreteness, the system of two conscious agents of section Dynamics of Two Conscious Agents, whose dynamics is governed by the kernel L of (17). This dynamics is clearly Markovian, because the change in state depends only on the current state. The space-time chain associated to L has, by definition, the kernel
Q : (E × ℕ) × (𝓔 ⊗ 2^ℕ) → [0, 1],    (39)

given by

Q((e, n), A × {m}) = L(e, A) if m = n + 1, and 0 otherwise,    (40)

where e ∈ E, n, m ∈ ℕ, and A ∈ 𝓔 (Revuz, 1984).
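The following is a minimal sketch (ours; finite E and a truncated time horizon, with the hypothetical function name spacetime_kernel) of this construction: the state (e, n) moves to (e′, n + 1) with probability L(e, e′) and nowhere else.

```python
# Sketch (ours): the space-time kernel Q of Eqs. (39)-(40) as a block
# matrix on states (e, n), n = 0, ..., n_steps - 1.
import numpy as np

def spacetime_kernel(L, n_steps):
    nE = L.shape[0]
    Q = np.zeros((n_steps * nE, n_steps * nE))
    for n in range(n_steps - 1):
        # only transitions with m = n + 1 receive probability L(e, e')
        Q[n * nE:(n + 1) * nE, (n + 1) * nE:(n + 2) * nE] = L
    return Q   # rows at the final time layer are zero due to truncation

L = np.array([[0.5, 0.5], [1.0, 0.0]])
Q = spacetime_kernel(L, 3)
assert np.allclose(Q[:4].sum(axis=1), 1.0)   # all layers but the last
```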
Then it is a theorem (Revuz, 1984) that, if Q is quasi-compact (this is true when the state space is finite, as here), the asymptotic dynamics of the Markov chain takes on a cyclical character:

- There are a finite number of invariant events or absorbing sets: once the chain lands in any of these, it stays there forever. And the union of these events exhausts the state space E. We will index these events with the letter ρ.
- Each invariant event ρ is partitioned into a finite number d_ρ of "asymptotic" events, indexed by ρ and by δ = 1, ..., d_ρ, so that once the chain enters the asymptotic event δ, it will then proceed, with certainty, to δ + 1, δ + 2, and so on, cyclically around the set of asymptotic events for the invariant event ρ.
- Then there is a correspondence between eigenfunctions of L and harmonic functions of Q (Revuz, 1984, p. 210).
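A toy simulation (ours; the transition matrix is an invented example, not one from the paper) illustrates this cyclical asymptotic structure: a deterministic 4-cycle forms one absorbing set with d = 4 asymptotic events, and a transient state eventually falls into it.

```python
# Sketch (ours): an absorbing set visited cyclically, per Revuz's theorem.
import numpy as np

L = np.zeros((5, 5))
L[0, 1] = L[1, 2] = L[2, 3] = L[3, 0] = 1.0   # deterministic 4-cycle
L[4, 0] = 0.5                                  # transient state 4 enters
L[4, 4] = 0.5                                  # ... the cycle eventually

rng = np.random.default_rng(0)
state, path = 4, []
for _ in range(12):
    state = rng.choice(5, p=L[state])
    path.append(int(state))
print(path)   # after absorption, cycles 0 -> 1 -> 2 -> 3 -> 0 -> ...
```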
We let

λ_{ρ,k} = exp(2iπk/d_ρ),    (41)

and

f_{ρ,k} = Σ_{δ=1}^{d_ρ} (λ_{ρ,k})^δ U_{ρ,δ},    (42)
where ρ is the index over the invariant events (i.e., absorbing sets), the variable k is an integer modulo d_ρ, and U_{ρ,δ} is the indicator function of the asymptotic event with index ρ, δ. For instance, in the example of section First Example of Asymptotic Behavior, there are 6 absorbing sets, so ρ = 1, 2, ..., 6. The first absorbing set has only one state, so d_1 = 1. Similarly, d_2 = 1, d_3 = 2, d_4 = d_5 = d_6 = 4. The function U_{1,1} has the value 1 on the state |0000⟩ and 0 for all other states; U_{5,3} has the value 1 on the state |1100⟩ and 0 for all other states.
Then it is a theorem that

L f_{ρ,k} = λ_{ρ,k} f_{ρ,k},    (43)

i.e., that f_{ρ,k} is an eigenfunction of L with eigenvalue λ_{ρ,k}, and that

g_{ρ,k}(·, n) = (λ_{ρ,k})^{−n} f_{ρ,k},    (44)

is Q-harmonic (Revuz, 1984). Then, using (41)–(42), we have

g_{ρ,k}(·, n) = exp(2iπk/d_ρ)^{−n} Σ_{δ=1}^{d_ρ} exp(2iπk/d_ρ)^δ U_{ρ,δ}
             = Σ_{δ=1}^{d_ρ} exp(2iπkδ/d_ρ − 2iπkn/d_ρ) U_{ρ,δ}
             = Σ_{δ=1}^{d_ρ} cis(2πkδ/d_ρ − 2πkn/d_ρ) U_{ρ,δ}
             = Σ_{δ=1}^{d_ρ} cis(2πδ/d_{ρ,k} − 2πn/d_{ρ,k}) U_{ρ,δ},    (45)
where d_{ρ,k} = d_ρ/k. This is identical in form to the wavefunction of the free particle (Allday, 2009, §7.2.3):

ψ(x, t) = A Σ_x cis(2πx/λ − 2πt/T) |x⟩    (46)
This leads us to identify A ↔ 1, U_{ρ,δ} ↔ |x⟩, δ ↔ x, n ↔ t, and d_{ρ,k} ↔ λ = T. Then the momentum of the particle is p = h/d_{ρ,k} and its energy is E = hc/d_{ρ,k}, where h is Planck's constant and c is the speed of light.
Thus, we are identifying (1) a wavefunction ψ of the free particle with a harmonic function g of a space-time Markov chain of interacting conscious agents, (2) the position basis |x⟩ of the particle with indicator functions U_{ρ,δ} of asymptotic events of the agent dynamics, (3) the position index x with the asymptotic-state index δ, (4) the time parameter t with the step parameter n, (5) the wavelength λ and period T with the number of asymptotic events d_{ρ,k} in the asymptotic behavior of the agents, and (6) the momentum p and energy E as functions inversely proportional to d_{ρ,k}.
Note that wavelength and period are identical here: in these
units, the speed of the wave is 1.
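A quick numerical check (ours) of the eigenfunction relation (43) on a single absorbing set: for a deterministic cycle of period d, the functions f_k(δ) = λ_k^δ with λ_k = exp(2iπk/d) are exactly the eigenfunctions of the cycle's transition kernel.

```python
# Sketch (ours): verify L f_k = lambda_k f_k for a deterministic d-cycle,
# the simplest instance of Eqs. (41)-(43).
import numpy as np

d = 4
L = np.roll(np.eye(d), 1, axis=1)        # L[delta, (delta + 1) mod d] = 1
for k in range(d):
    lam = np.exp(2j * np.pi * k / d)     # Eq. (41)
    f = lam ** np.arange(1, d + 1)       # f(delta) = lam^delta, Eq. (42)
    assert np.allclose(L @ f, lam * f)   # eigenfunction relation, Eq. (43)
```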
This identification is for non-relativistic particles. For the relativistic case we sketch a promising direction to explore, starting with the dynamics of two conscious agents in an undirected join. In this case, the state of the dynamics has six components: N_1, N_2, X_1, X_2, G_1, G_2. We identify these with the generating vectors of a geometric algebra 𝒢(2, 4) (Doran and Lasenby, 2003). The components N_1 and N_2 have positive signature, and the remaining have negative signature. 𝒢(2, 4) is the conformal geometric algebra for a space-time with signature (1, 3), i.e., the Minkowski space of special relativity. The conformal group includes as a subgroup the Poincaré group of space-time translations and rotations; but the full conformal group is needed for most massless relativistic theories, and appears in theories of supersymmetry and supergravity. The Lie group SU(2, 2) is isomorphic to the rotor group of 𝒢(2, 4), which provides a connection to the twistor program of Roger Penrose for quantum gravity (Penrose, 2004).
Thus, the idea is to construct a geometric algebra 𝒢(2, 4) from the dynamics of two conscious agents, and from this to construct space-time and massless particles. Each time we take an undirected join of two conscious agents, we get a new geometric algebra 𝒢(2, 4) with new basis vectors as described above. Thus, we get a nested hierarchy of such geometric algebras from which we can build space-time from the Planck scale up to macroscopic scales. The metric