author proofs
CHAPTER SIX
(NEVER)MINDING THE GAP?
INTEGRATED INFORMATION THEORY
AND PHILOSOPHY OF CONSCIOUSNESS
Federico Zilio [1]
University of Padua
Abstract: The aim of the article is to discuss the strengths and weaknesses
of the Integrated Information Theory of consciousness and to challenge it
through contemporary issues in philosophy of mind and phenomenology.
I argue that some objectivist theories of consciousness underestimate the
constitutive role of the subjective perspective and seem to face the same
problems as the dualism that contemporary sciences would like to avoid.
IIT approaches the hard problem of consciousness by moving from the
axioms of experience to the postulates of its physical substrate, and it
considers the phenomenal aspect not as an illusory property to be reduced
but as the theoretical starting point of the research. The aim of IIT is to
account for both the quantity and quality of consciousness in a
non-reductive way. However, despite its potential relevance in the
empirical domain, the theory presents some theoretical limitations, which
are here discussed from a metaphysical, epistemological and
phenomenological perspective. Based on this critical discussion, I suggest
recalibrating IIT in order to redefine its ontological and epistemological
grounds.
Keywords: Integrated Information Theory; Consciousness; Hard Problem;
Phenomenology; Brain.
[1] PhD Student in Philosophy, Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua (Italy).
1. The idea of a “science of consciousness”
Cognitive science – in particular, some branches of cognitive and
computational neuroscience – has become progressively relevant in recent
studies on consciousness. During the nineteenth century, psychology and
neuroscience provided new methodologies to study the mind. Nowadays,
consciousness is of interest not only for philosophers but also for
neuroscientists. With the recent neuroscientific revolution, we can now
study the mind by studying the brain. Thus, for a philosopher it is now
problematic, or even impossible, to study consciousness without also
considering empirical findings.
The concept of consciousness, as it is known through common sense or
even in the contemporary debate, is not an original and primitive concept;
rather, it depends on certain conceptual mutations and epistemological
revolutions in the history of philosophy and science. It should be noted that
Ancient Greek (cf. Onians, 1954) and medieval philosophy are
inevitably relevant for the understanding of our conceptual and
theoretical roots. However, I claim that the turning point to consider for
the analysis of the concept of consciousness is the modern era, especially
two key figures. In the seventeenth century, Galileo Galilei and René
Descartes respectively laid the foundations for the scientific method and
modern philosophy. This caused a reconceptualization of the relation
between subjectivity and objectivity, producing a conceptual distinction
that turned out to be useful both for scientific progress and philosophical
research.
Galilei developed the scientific method, based on certain ontological
assumptions about the possibility of finding a thought-world correspondence
through the "mathematization of nature" (Galilei, 1995). On the other
hand, Descartes based his studies on a mechanistic view of the world, in
which he included not only external nature but also the whole human
organism and its functions − e.g. sensory and motor processes,
memory, the digestive system, etc. (Descartes, 1966). These scientific and
philosophical revolutions produced a reformulation of the relation between
subject and object, which excluded the former from the domain of
science: in order to be scientific, this domain must be constitutively
objective − i.e. a-subjective (Whitehead, 2011; Husserl, 1970).
Consequently, consciousness − in light of its phenomenal and qualitative
states, hence its subjective nature − was excluded from the domain of
modern science. In recent times, however, the cognitive sciences have
progressively been trying to fill this gap, revealing new insights into
peripheral and subordinate brain functions and moving towards an
increasingly naturalistic view of the mind and consciousness (Dennett,
1991; Edelman, Tononi, 2000). One of the most discussed contemporary
theories of consciousness is the Integrated
Information Theory (IIT), formulated principally by Giulio Tononi, Christof
Koch and Marcello Massimini.
The aim of this article is to describe and discuss the strengths and
weaknesses of this theory and to challenge it in light of contemporary issues
in philosophy of mind and phenomenology. Before discussing the theory, in
order to understand its philosophical background, it is better first to consider
some epistemological questions around the study of the mind and the brain.
In the evergreen debate on the relationship between mind and brain, it
is possible to distinguish four different approaches. [2] The first approach is
the mind-based approach, which puts the mental at the core of the
analysis by addressing the question "how do the mind and its mental
features relate to the brain?". The second approach is the brain-based one,
which overturns the former question into "how do the brain and its neural
features relate to the mind?". The former approach entails the
assimilation of mental concepts into neuronal structures, while the latter
does not establish the correlation of any mental feature to a neural one;
rather, a brain-based approach starts from considering the brain as such
and its relationship with the body and the environment, and it only
secondarily tries to find the correlation between some neural mechanism
or activity and some mental feature or cognitive function. Both
methodological approaches can lead to reductive consequences, such as
the brain-reductive approach – i.e. “how to reduce the mind to the brain
as the final goal”. Furthermore, from the same premises it is possible to
develop a mind-reductive approach – i.e. “how to reduce the brain to the
mind as the final goal". This classification is merely methodological;
therefore, in the mind-based approach we can also find dualism as well as
functionalism and non-reductive physicalism, while non-reductive
neurophilosophy and 4E cognition (extended, embedded, embodied,
enactive) can also be found in the brain-based one. There are also some
positions that take the mind into account as a negative template, with the
aim of reducing it to cerebral features – e.g. epiphenomenalism, identity
theory and eliminativism.
2. The Integrated Information Theory
I claim that IIT could be included among the mind-based approaches because
it addresses the problem of consciousness starting from experience
itself, i.e. from the identification of the essential and self-evident
properties of experience, heading towards a definition of the properties of
the physical substrates of consciousness (Tononi et al., 2016; Tononi,
2017a).

[2] For the first three approaches, see Northoff, 2014.

Tononi and colleagues define five "axioms" which describe our
phenomenal existence:
• Intrinsic existence: my existence is real and intrinsic for me,
and I can immediately be sure of it.
• Composition: experience is structured in various parts,
qualities and objects.
• Information: any experience gives me specific information
that differs from others.
• Integration: my experience is unitary, because it gives me a
unique scene as a whole.
• Exclusion: consciousness is definite, in content and spatio-temporal
grain, thus it excludes other possible experiences
at the same time.
Based on these axioms, IIT tries to give them a mathematical expression,
correspondingly postulating the causal properties of the substrate of
consciousness, which must possess an intrinsic cause-effect power in order
to exist and must be composed of causal parts that specify a defined
structure. Also, it must be a unitary system, irreducible to its single
parts and defined within a spatio-temporal grain. When translating our
phenomenology into "postulates" about the physical substrate of
consciousness, it is important to analyse the identity between integrated
information and experience: if having experience means being conscious,
and if integrated information is a kind of causal property within a physical
system, we can then account for the translation of the axioms of experience
into postulates of the physical substrate.
According to IIT, experience is the capacity of a system to integrate
information, that is, to discriminate among a set of possible states. [3] In
other words, integrated information is a measure of the cause-effect power
of a physical system (Oizumi et al., 2014): the system can be seen as a set of
possible states within an internal structure of cause-effect relations,
which determine the system itself. In a system there can be a complex – a
set of elements with causal power – that generates a local maximum of
integrated information (Φmax).

[3] It is clear that, with this definition, the concept of conscious experience according to IIT is not reserved to human beings or animals but, as we shall see later, can be extended also to artificial and other natural systems.

A set of these elements shapes a "concept"
of integrated information, which is impossible to subdivide due to the
irreducibility of its cause-effect relations. [4] In turn, an irreducible and
indivisible structure of these concepts is called a "conceptual structure",
which composes the experience as such: the configuration of the structure
defines the quality of the experience, while its level is defined by the
quantity of integration. Hence the definition of the identity between the
initial phenomenal experience and the integrated information generated by
the maximally irreducible conceptual structure:
"The maximally irreducible conceptual structure (MICS)
generated by a complex of elements is identical to its
experience. The constellation of concepts of the MICS
completely specifies the quality of the experience (its quale
sensu lato (in the broad sense of the term)). Its irreducibility
ΦMax specifies its quantity. The maximally irreducible cause-effect
repertoire (MICE) of each concept within a MICS specifies
what the concept is about (what it contributes to the quality of
the experience, i.e. its quale sensu stricto (in the narrow sense of
the term)), while its value of irreducibility φMax specifies how
much the concept is present in the experience. An experience is
thus an intrinsic property of a complex of mechanisms in a
state" (Oizumi et al., 2014: 3).
It is important to note that this is not a direct identity between the
physical and the experience, but between the experience and the
conceptual structure of integrated information that is generated by the
physical substrate. This leads to the possibility of measuring experience in
bits of integrated information.
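The whole-versus-parts logic behind such a measurement can be illustrated with a deliberately simple sketch. This is an illustrative toy of my own, not IIT 3.0's actual Φ algorithm (which searches over all mechanisms, purviews and partitions): in a two-node network where each element copies the other, the current state fully specifies the past state, while the same dynamics with the connections severed leaves the past unconstrained; the surplus of the former over the latter is information the system generates above and beyond its parts.

```python
import math
from itertools import product

# Toy two-node network: A(t+1) = B(t), B(t+1) = A(t).
# A transition function returns the set of current states
# reachable from a given past state.
def intact(past):
    a, b = past
    return {(b, a)}  # deterministic: exactly one successor

def severed(past):
    # With the A<->B connections cut, each node is driven by
    # unconstrained noise, so every current state is reachable.
    return set(product((0, 1), repeat=2))

def entropy(n_equally_likely):
    # Entropy in bits of a uniform distribution over n states.
    return math.log2(n_equally_likely)

def cause_information(current, transition):
    """Bits gained about the past state once the current state is
    known, starting from a uniform prior over the four past states."""
    past_states = list(product((0, 1), repeat=2))
    compatible = [p for p in past_states if current in transition(p)]
    return entropy(len(past_states)) - entropy(len(compatible))

state = (1, 0)
whole = cause_information(state, intact)    # 2.0 bits: past fully specified
parts = cause_information(state, severed)   # 0.0 bits: past unconstrained
integrated = whole - parts                  # 2.0 bits above the cut
print(whole, parts, integrated)
```

On this toy network the whole specifies two bits about its own past while the severed parts specify none, so all of the information is integrated. In IIT proper, Φ is instead obtained by minimizing this kind of whole-minus-partition difference over all possible cuts of the system; the two-node toy has only one non-trivial bipartition, so the single subtraction above already plays that role.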
2.1. The concept of Information in IIT
Before describing IIT's explanatory power in further detail, it is essential
to analyse the concept of information used here, in order to avoid possible
misunderstandings. The concept of information adopted by IIT is basically
the opposite of Shannon's theory of information (Shannon, 1948): the
Shannonian concept of information can be regarded as a neutral message
transmitted across a channel, and consequently as having an extrinsic nature,
which means that it needs an external observer in order to acquire
meaning in the input-output system; without such an observer, information
is completely detached from any kind of meaningful interpretation, and the
theory is supposed only to assess the efficiency of the transmission.

[4] "Irreducibility" means that it must be impossible to subdivide the components of the system, eliminating their reciprocal relations, without losing the structure of the system itself in other sub-systems. Just as "seeing a red triangle is irreducible to seeing a triangle but no red color, plus a red patch but no triangle" (axiom of integration), so too "[a] mechanism can contribute to consciousness only if it specifies a cause-effect repertoire (information) that is irreducible to independent components" (Oizumi et al., 2014: 3).

Tononi
acknowledges a possible comparison between Galilei’s scientific
revolution and Shannon’s theory of information: “Just as science
flourished once Galileo removed the observer from nature,
communication and storage of data exploded once Shannon removed
meaning from information” (Tononi, 2012: 145). However, according to
Tononi, for a comprehensive science of consciousness these
epistemological revolutions must be – I would say – amended or completed:
in virtue of the strong correlation between experience and information, and
of the claim that subjective experience needs to be reintroduced into the
objective domain of the contemporary sciences, the concept of
information likewise needs to regain meaning, but from an intrinsic
perspective. In Tononi's own words: "information integrated by the causal
powers of a mechanism inside a system, from the system’s “intrinsic”
perspective, acquires meaning–in fact, it becomes meaning. And the
observer is returned to nature” (Tononi, 2012: 145).
Therefore, this conception of information can be understood as non-
Shannonian, as it is not only marked by meaning, it is meaning. The
meaningful information is characterized by "differences that make a
difference" (Koch, Tononi, 2013) from the intrinsic perspective of a system –
in other words, by the cause-effect relations among all past and future
states within the system, without any external addition of meaning. Hence
the difference made by the mechanism is not just a reduction of
uncertainty (Shannonian information as communication capacity) but a
specification – i.e. what it takes to transform something and make it into
something else (information as "giving a particular form", in the classic
sense of the word, from the Latin verb in-formare) (Tononi et al., 2016). In
this way, the moment we measure integrated information in a system, we
are actually making explicit the intrinsic phenomenal capacity of the
conceptual structures specified by the physical substrate of
consciousness, which generates (or better, which directly "is") experience.
2.2. Explanatory and predictive power of IIT
In order to pursue its aims, IIT must possess a predictive and explanatory
power, since if the physical substrate changes, the conceptual structure and
our experience, in turn, will change (Tononi, 2017a). For example, more
complexity in the corticothalamic system likely means more integrated
information, and consequently more conscious experience. [5] According to
IIT, low integrated information is expected to correlate with a loss of
consciousness – e.g. in vegetative state, minimally conscious state or general
anaesthesia – due to the disruption of the brain's capacity to create conceptual
structures of integrated information (Rosanova et al., 2012). By contrast,
patients with locked-in syndrome [6] should possess high integrated
information, similar to that of healthy people (Casali et al., 2013). Also,
during dreamless sleep the brain seems incapable of integrating information
into a conscious complex (Massimini et al., 2005), while in REM sleep there
are more widespread and structured patterns of cortical activation (Massimini
et al., 2010). Another example regards the explanatory power of IIT: the
cerebellum, despite having more neurons than the rest of the entire brain,
should be excluded from the conceptual structure of consciousness
because of its modular organization (Tononi, 2017a). Indeed, cerebellar
seizures usually produce no consequences for our conscious state as such.
Interestingly enough, IIT also raises some counterintuitive implications:
for example, a photodiode that can distinguish between light and
dark, adjusting its own sensitivity to photons, can be considered a
minimally conscious system (see below; Oizumi et al., 2014), while a
computer simulation of our behaviour and biophysics would fragment
itself into a series of minimally conscious mini-complexes (microchips),
much as the modular cerebellum does (Tononi, 2017a).
2.3. IIT and Philosophy
What makes IIT intriguing also for philosophy is its potential application to
some epistemological and ontological problems of consciousness.

[5] Note that the complexity of the system is not merely related to the level of neuronal activity. For example, "cortical neurons fire almost as much during deep slow-wave sleep as during wakefulness, but the level of consciousness is much reduced in the former condition. Similarly, in absence seizures, neural firing is high and synchronous; yet consciousness is seemingly lost" (Tononi, 2005: 13). What mainly matters is the shape of the informational relationships the system specifies.

[6] Patients affected by this syndrome suffer from a paralysis of nearly all voluntary muscles, except for vertical eye movements and/or blinking. However, they are conscious, cognitively intact and able to perceive the environment, yet extremely limited in interaction and communication: literally, their minds seem 'locked' inside their bodies (Gosseries et al., 2009).

Besides determining whether, how much and in which way a system is conscious
(explanatory, predictive and inferential power; Tononi, 2017a), IIT may
represent a new approach to tackling the hard problem of consciousness
(Chalmers, 1995), asking why some physical processes are accompanied by
experience, or have something “it is like to be” in that state (Nagel, 1974). If
it seems impossible to squeeze consciousness out of the physical, maybe it is
easier to explain the relationship starting from phenomenology, finding new
correlations on the basis of our experiential structures rather than through the
long-standing search for the neural correlates of consciousness. Thus,
according to IIT the first-person perspective – i.e. the subjective way
through which the world is revealed to us – becomes the epistemic and
methodological starting point, not an emergent consequence to be
explained from the physical (Tononi et al., 2016).
Another problem raised by IIT is the explanatory gap argument, which
underscores the inability to bridge the epistemic gap between all the
functions described scientifically with an objective approach – e.g.
physics, neuroscience, biology, etc. – and lived experience itself – e.g.
the world perceived from our subjective point of view (Levine, 1983): no
objective explanation can ever account for experience itself in an
exhaustive way. [7] In some sense, IIT accepts the explanatory gap. The
description offered by IIT does not lead to ontological reduction but
at most to methodological reductionism (Tononi, 2017b): one of the aims
of the theory is to analyse the system starting from any single part of it,
without forgetting that the system as a whole is ontologically more than
the mere sum of its parts (Hoel et al., 2016), due to all the intrinsic causal
relations among its elements. However, a full description of all the conceptual
structures in the complex that generates integrated information is all there
is to say, from a scientific perspective, about that precise experience and its
relationship with the physical substrate. This does not mean that there is
no gap, but that the description can never substitute the experience as
such. This is because experience is not a way of describing but a “way of
being” (Tononi, 2008).
[7] "The explanatory gap argument doesn't demonstrate a gap in nature, but a gap in our understanding of nature. Of course, a plausible explanation for there being a gap in our understanding of nature is that there is a genuine gap in nature. But so long as we have countervailing reasons for doubting the latter, we have to look elsewhere for an explanation of the former" (Levine, 1999: 12).

Consequently, it is also possible to approach the knowledge argument and
ask how IIT relates to it. Jackson's knowledge argument concerns Mary
the neuroscientist, who lives in a black and white laboratory and knows
everything about the science of colour, but she has never seen a colour. So,
when she finally escapes from the laboratory and sees the world, she knows
something new about colours. Jackson developed this argument to claim,
against physicalism, that conscious experience involves non-physical
entities and properties. We can distinguish between two main
versions of the argument: the weak-epistemological one – i.e. "there is a
kind of knowledge which does not concern the physical" – and the strong-ontological
one – i.e. "conscious experience involves non-physical
properties" (Horgan, 1984). Tononi points out that "[t]he argument loses
its strength the moment one realizes that consciousness is a way of being
rather than a way of knowing" (Tononi, 2008: 234). Indeed, according to
IIT, conscious experience is neither an extra-physical kind of knowledge – see
the explanatory gap – nor an extra-physical object – see the identity
between conceptual structure and experience.
3. Analysis from multiple perspectives
After having outlined some of the main features of IIT, I shall analyse and
discuss some possible problems for the theory and make some observations
on multiple levels – i.e. metaphysical, epistemological and
phenomenological. [8]
3.1. Metaphysical observations
From a metaphysical perspective, we can make some provisional
considerations about the theory. IIT is not a classical type-identity theory
between the mental and the physical, given that the same experience may be
supported by two different physical substrates – e.g. both anaesthesia and
generalized seizures lead to loss of consciousness (Tononi, 2017a). I also argue
that it cannot even be defined as anomalous monism, because IIT
concretely tries to provide postulates and laws for describing consciousness,
while Davidson's theory denies the possibility of any psychophysical law
(Davidson, 1970). Furthermore, IIT seems not completely definable as a
functionalist theory, because, “whether a system is conscious or not cannot
be decided based on its input-output behaviour” (Oizumi et al., 2014: 21).
Indeed, IIT accepts the possibility of a perfect feed-forward simulation of a
human being that would remain unconscious.

[8] There are also some mathematical and empirical issues but, given that I do not have the mathematical expertise to discuss them, I only mention them here: for a critique of the computational structure of the theory, see Aaronson, 2014; for the reply, see Tononi, Koch, 2016. For a critique of the inconsistency of the mathematical definition of Φ, see Aaronson, 2014; for the reply by Tegmark, see Horgan, 2015.

However, it can be argued
that Tononi might misunderstand the meaning of the well-known
philosophical zombie argument (Chalmers, 2002), because the identity
relation between humans and the unconscious zombie is not only
functional but also completely physical. Nevertheless, although IIT rejects
functionalism, it seems vulnerable to the main arguments against
functionalism, namely the "fading" and "dancing" qualia arguments
(Cerullo, 2015). Therefore, we may consider IIT – through a kind of
abductive reasoning – as an elegant version of functionalism.
There is also the possibility of considering IIT as a token-identity theory,
that is, as positing a particular relation not between the physical and
consciousness – i.e. types – but between "that" particular conscious state
and "that" particular conceptual structure – i.e. tokens. Another fact to
consider is that IIT accepts multiple realizability, which is a common factor
between identity theory and functionalism: the brain is not necessary;
indeed, it could be substituted by another physical substrate. However,
generating integrated information from a maximally irreducible conceptual
structure is not so easy for an artificial system or for something different
from a brain – e.g. robots, computers, etc. (Tononi, 2017a). A feed-forward
system is not sufficient for generating integrated information (Oizumi et
al., 2014).
Furthermore, I claim that it is possible to glimpse, through the corollaries
of the theory, the shadow of panpsychism, given that IIT has no minimum
threshold for the determination of what is conscious and what is not;
even bacteria, transistors, minerals and atoms could be conscious in some
way, depending on whether they can be interpreted as systems with
sufficient integrated information (Tononi, 2008; Tononi, Koch, 2015).
Therefore, we can also define IIT as quasi-panpsychism, "partial pan-experientialism"
(Cerullo, 2015: 8) or an elaborate version of
classical naïve panpsychism (Koch, 2012).
We can try to summarize all these hypotheses about the metaphysical
ground of IIT through a comparison made by Matteo Grasso between the
theory and various metaphysical accounts examined by David Chalmers
(Grasso, 2013; Chalmers, 2002). Grasso notes that IIT rejects Type-A
materialism such as reductive physicalism, because, as already explained,
the identity is not between consciousness and its physical substrate. On
the other hand, IIT has some connections with Type-B materialism – the
supervenience version – because it accepts the explanatory gap without
entailing any ontological gap: indeed, according to this metaphysical
account, the phenomenal is not identical to the physical but supervenes
on it and totally depends on it. In other words, the relationship between
the phenomenal and the physical is a "necessary a posteriori truth", like that
between "water" and "H2O" (Kripke, 1971). However, supervenience has
recently been rejected by
Tononi, since it leaves no room for causal emergence or mental causation
(Tononi, 2017b). For the same reason, IIT is not compatible with Type-E
dualism – i.e. epiphenomenalism, which considers mental states to be
devoid of causal efficacy. Another metaphysical account that
could accommodate IIT is non-reductive monism; however, according to this
kind of Type-F monism, consciousness is a fundamental property of the
monistic substance of all reality, just as the physical is; hence it accepts the
ontological gap. Therefore, the ontological gap assumed in non-reductive
monism seems incompatible with the previous discussion about
IIT and the knowledge argument.
In addition to all these metaphysical positions, I would propose another
possibility, arguing that IIT could entail a concept of identity not as
equation but as the asymmetric relation of composition (Place, 1956).
From here, the relation between conscious experience and integrated
information would be similar to the sentences "lightning is (composed of)
electrical discharge, but not vice versa" and "clouds are (composed of) water
molecules, but not vice versa". Similarly, we can say that "conscious
experience is composed of integrated information in a complex", while at
the same time there is no sense in saying that "integrated information in a
complex is composed of conscious experience".
3.2. Epistemological observations
From the epistemological perspective, there are some issues regarding the
explanatory power of IIT, its conception of information itself, the hard
problem cited before, and the conflict between the theory and common
sense. Michael Cerullo criticized the explanatory power of IIT, proposing a
faux theory – the Circular Coordinated Message Theory – that should
possess the same explanatory and predictive power as IIT – e.g. about the
split-brain patients and the cerebellum. [9] However, it is interesting to note
that CCMT is purely based on an arbitrary potential property of
consciousness – i.e. the self-evident property that consciousness is related
to information traveling in feedback loops within a system (Cerullo, 2015).
For this reason, according to Cerullo, IIT is still too abstract and needs
more empirical support to avoid triviality. [10]

Regarding the concept of information used in IIT, Searle has doubts
about the consistency of the notion itself (Searle, 2013). According to him,
information is always an observer-dependent phenomenon; therefore it
cannot be used to explain consciousness, which is observer-independent.
In other words, given that the meaning of information is always in the eye
of a conscious beholder, we need consciousness in order to define the
notion of information itself. Searle's conception is very similar to
Shannon's notion of information (Mindt, 2017). In his view, information is
not information until it is "read" by an entity with a mind. There may be
messages in the information carrier, but they become information only once
read. Therefore, explaining consciousness through information would
lead to a petitio principii (Searle, 1998). Koch and Tononi reply that, even
though Searle is right about the observer-independent nature of
consciousness, IIT uses a different conception of information, as
explained above, which can be measured as "differences that make a
difference" to a system from its intrinsic perspective (Koch, Tononi, 2013);
this perspective cannot be observer-relative, by definition. However, in the
same article Searle replies, stressing the point that, except for the thinking
of a conscious agent (which is observer-independent), all information is
observer-relative. [11]

[9] "CCMT makes the same predictions as IIT for split-brain syndrome and the cerebellum. In split-brain syndrome CCMT predicts that two separate conscious systems emerge because each brain hemisphere contains significant cortico-thalamic loops. Because there is a large redundancy between the two hemispheres, their Omicron value is not greatly reduced compared to when they form a single complex. CCMT predicts that the cerebellum is not conscious because its modular design and lack of lateral connections among its basic modules make it ill-suited for circular information flow" (Cerullo, 2015: 5).

[10] From an empirical perspective, the integrated information value has, at the moment, been computed only for abstract systems, which probably behave differently from concrete physical systems (Barrett, 2015; Tononi, 2017a). However, until now, it seems that the theory provides a heuristic value for interpreting integrated information through some practical measurement, like the perturbational complexity index (Barrett, 2015; Casali et al., 2013).

[11] "[…] consciousness is an intrinsic feature of certain human and animal nervous systems. The problem with the concept of "information processing" is that information processing is typically in the mind of an observer. For example, we treat a computer as a bearer and processor of information, but intrinsically, the computer is simply an electronic circuit […] any system at all can be interpreted as an information processing system. The stomach processes information about digestion, the falling body processes information about time, distance, and gravity. And so on. The exceptions to the claim that information processing is observer-relative are precisely cases where some conscious agent is thinking" (Searle, 1998: 1941-2).

Analyzing the difference between conscious and unconscious
beings is one of the main issues to solve in order to define the relationship
between information and the observer. On the contrary, assuming the
relation between information and consciousness from the beginning would
clearly predetermine the results of the research. In this respect, I argue that
insisting without any strong empirical evidence that photodiodes, atoms,
molecules and some artificial objects can be conscious, and they can
consequently possess observer-independent integrated information, is a
mere postulation. Indeed, moving from the identity between
consciousness and integrated information as a starting assumption – and
not as a result – would inevitably prejudice the ontological commitment of
the theory, e.g. leading to a panpsychist account of nature.
In relation to Chalmers’ hard problem of consciousness, it is true that IIT
is construed as an attempt to crack this problem (Tononi et al., 2016;
Mindt, 2017), but I argue that it may not be able to solve it. First, there
seems to be a misunderstanding of the hard problem as such, because it
does not matter whether the problem is approached from phenomenology to
the brain or vice versa. Instead, the hard problem raises the question of
“why” – not “how” – any physical process or information generates (or is)
consciousness. According to Cerullo, IIT does not solve even the so-called
easy problems;12 at most, it may be considered a theory of
proto-consciousness, dissociated from intelligence, self, body, perception,
and so forth (Cerullo, 2015).13 It is more plausible that IIT is effective
with respect to the so-called “real problem”, which raises the question of
“how to account for the various properties of consciousness in terms of
biological mechanisms, without pretending it doesn’t exist (easy problem)
and without worrying too much about explaining its existence in the first
place (hard problem)” (Seth, 2016). Furthermore, according to Garrett
Mindt, even though IIT seems to overcome the ontological gap, the
explanatory gap remains an obstacle to the solution of the hard problem:
even though we can go from phenomenology to the physical, the hard
problem remains unsolved from the physical to phenomenology (Mindt,
2017). Hence, I would again propose the interpretation of the identity between
consciousness and integrated information as an asymmetrical composition:
“Phenomenal consciousness is composed of the conceptual structure of a
physical substrate, but not vice versa”. This seems to me the only way to
better understand the metaphysical ground behind this theory.

12 “The easy problems of consciousness are those that seem directly
susceptible to the standard methods of cognitive science, whereby a
phenomenon is explained in terms of computational or neural mechanisms”
(Chalmers, 1995: 201).

13 In fact, this is what Tononi claims about consciousness: “consciousness –
in the sense of having an experience – does not require sensorimotor loops
involving the body and the world, does not require language, introspection or
reflection, can do without spatial frames of reference and perhaps even
without a sense of the body and the self, and does not reduce to attention or
memory” (Tononi, 2009: 376). See also the double dissociation between
consciousness and intelligence (Tononi, 2017b).
I would add another issue for IIT, related to the hard problem of
consciousness: the so-called “pretty hard problem”, formulated by Scott
Aaronson. The problem addresses the question of which physical systems
are conscious and which are not, and demands a theory that can match
our intuitions about which systems are conscious. It is quite clear that this
last requirement produces a non-scientific problem, since science
often generates counterintuitive theories. Why should a theory be asked to
match our intuitions? For example, the concept of time in the theory
of relativity is very different from our phenomenal idea of it: according to
the theory of relativity, time is not absolute and linear; moreover, it can
dilate under particular conditions, completely diverging from the idea of time
as an absolute container or a unidirectional arrow on which events are
placed. Besides, the time of the theory of relativity is a concept very
different from our phenomenological way of perceiving time, e.g., according to
Husserl’s phenomenology, through retention, protention and duration.
However, it seems impossible to avoid the way in which we perceive time
through our subjectivity and common sense and to substitute it with the
scientific notion of time in our daily life.
The same argument may be used for consciousness, because it seems to
be an unavoidable semantic content of our common sense and folk
psychology that defines consciousness as a primitive concept of our
phenomenal life – i.e. consciousness means, in part, what humans have and
a DVD recorder does not.14 This criticism does not mean that a photodiode or
a DVD recorder cannot possibly have integrated information – this would
require another assumption – but that the identity between information and
consciousness is merely postulated. IIT starts from our subjective
experience in order to account for this identity, but it then produces
predictions and descriptions that are highly in contrast with our common
sense. It seems that IIT applies a double standard to common sense:
Tononi appeals to common sense when it suits IIT’s purposes – e.g.
when the theory starts from “phenomenological axioms” and normal states of
consciousness in developing its structure – but he denies the value of
common sense for cases such as the conscious photodiode. I would argue
that, in some way, IIT risks being immune to falsification. Indeed, under
normal conditions, if a theory claimed, for example, that a DVD recorder can
be highly conscious, this kind of prediction would be a falsification, a
reductio ad absurdum, or at least a difficult point to defend for any theory
of consciousness (Searle, 2013). Hence, when IIT predicts that a photodiode
can be conscious, the burden of proof lies entirely with the theory.

14 “The crucial point is that the word “consciousness,” as pretty much
everyone uses it, is defined largely by reference to our own example. We
don’t have access to some separate, human-independent definition of
consciousness, which would allow us even to frame the question of whether
it’s possible that toasters are conscious whereas humans are not. By analogy,
imagine 19th-century scientists built a thermometer that delivered the result
that boiling water was colder than ice. The possibility that that was true
wouldn’t even merit discussion—it would be immediately rejected in favor of
an obvious alternative, that the thermometer was simply a bad thermometer,
since it failed to capture our pre-theoretic notion of what temperature is
even supposed to mean, which concept includes boiling water having a higher
temperature than ice” (Aaronson, 2014).
Regarding the example of the conscious photodiode, I would claim that
this might not be a good prediction of IIT but, conversely, a problem for it.
According to IIT, a simple photodiode without recurrent connections is
unconscious (Oizumi et al., 2014). We can compare its behavior with that of a
spiking neuron or a “shishi odoshi”, a water fountain typically used to
decorate Japanese gardens or as a deer-scarer: the photodiode indicates
the presence of light when it integrates sufficient photons, the shishi
odoshi makes a sound when it is filled with water, and the neuron integrates
synaptic input until the membrane potential reaches a threshold voltage
and then generates an action potential until the membrane potential is
reset.15 All three have a simple input-output relation, without any
recurrent connection: the input affects the output element and then the
system returns to its original state. For this reason, the elements here do
not form a complex and generate no quale. Instead, according to IIT, a
photodiode with a recurrent connection is minimally conscious; therefore it
possesses an intrinsic perspective on its system. The minimally conscious
photodiode consists of a detector and a predictor. The recurrent
connection between the detector and the predictor serves as a memory
feedback that increases the sensitivity of the photodiode for integrating
information. Suppose that the detector receives two external inputs (two
photons); the predictor then serves as a memory feedback that effectively
decreases the threshold of the detector, so that the next time a single input
will be sufficient to activate the photodiode (Oizumi et al., 2014: 20).

15 For the comparison between the neuron and the shishi odoshi, see Helias
et al. (2011).
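The two regimes just described – the purely feedforward photodiode and the one
with a detector-predictor feedback – can be sketched in a few lines of code.
This is only an illustrative toy model: the class names, the threshold values,
and the one-step “memory” are my own assumptions, not part of IIT’s formalism
or of the simulations in Oizumi et al. (2014).

```python
class Photodiode:
    """Feedforward detector: signals light once enough photons are
    integrated, then simply returns to its original state."""

    def __init__(self, threshold=2):
        self.base_threshold = threshold  # illustrative value, not from IIT
        self.threshold = threshold
        self.charge = 0

    def receive(self, photons):
        """Accumulate input; fire and reset when the threshold is reached."""
        self.charge += photons
        if self.charge >= self.threshold:
            self.charge = 0      # back to the original state, no trace left
            return True          # output: "light detected"
        return False


class RecurrentPhotodiode(Photodiode):
    """Adds the 'predictor': after a detection, a feedback signal lowers
    the detector's effective threshold for the next input."""

    def receive(self, photons):
        fired = super().receive(photons)
        # Recurrent connection acting as memory feedback: a recent
        # detection increases sensitivity, so one photon then suffices.
        self.threshold = 1 if fired else self.base_threshold
        return fired
```

With a threshold of two photons, the plain photodiode needs two photons on
every occasion, while the recurrent one, having just fired, detects a single
photon the next time – the feedback that, in the example discussed above, is
supposed to make the difference between no quale and a minimal one.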
I would argue that it is very arduous and counterintuitive to say that
even this photodiode with memory is minimally conscious, while a
biological structure like the cerebellum could not be conscious as a
system. Actually, the photodiode is not really different from a watermill.
The so-called predictor is a kind of inductor, an electronic component that
opposes changes in the current flowing through it, i.e. the property of
inductance: what they call “memory” in the example of the photodiode is
nothing more than what could be called inductance or “electric inertia”. An
equivalent mechanism is the inertia produced by a watermill: the watermill
can detect the presence of water just as the photodiode detects the presence
of light, and the inertia produced by the water increases the sensitivity of
the wheel just as the inductance does in the photodiode. Strongly flowing
water moves the wheel, and the movement stores rotational inertia in the
wheel; consequently, this inertia increases the sensitivity of the wheel, so
that the next time a smaller flow of water will be sufficient to move it.
However, at this point, it would be counterintuitive and difficult to argue,
within a scientific theory of consciousness, that a watermill is conscious
because it integrates information. In the best-case scenario, this concept of
consciousness is anything but the ordinary concept we know; it would
therefore be appropriate to consider IIT only as a theory of integrated
information, not a theory of consciousness, considering that the prediction
of the conscious photodiode sounds more like a problem than a proof for the
consistency of IIT.
3.3. Phenomenological observation
Lastly, from a phenomenological perspective, IIT tries to fill some
epistemological and ontological gaps in the study of consciousness and
does not treat the phenomenal aspect of our experience as a property to
be reduced or eliminated (reductive physicalism or eliminative
materialism). However, the role of subjectivity is not clear. IIT attempts to
achieve the mathematization of our phenomenology, an operation not so
different from other approaches to consciousness that I would call
“objectualist” (Rowlands, 2001). IIT may seem an attempt to overcome
Galileo’s paradigm of the scientific method, focusing on the first-person
perspective and recovering the value of subjective experience. However, the
aim of the theory remains to achieve the objectification and quantification
of consciousness as a phenomenon, in order to test and measure it. Also,
once the phenomenological axioms are translated into the postulates of the
physical substrate of consciousness, they are discarded from the development
of the theory, because they become useless outside the theoretical domain.
Paradoxically, IIT may be interpreted as a hyperbolic attempt to carry on
the Galilean project of mathematizing nature (Tononi, 2003). It can be
interpreted as an instance of what Husserl called the absurdity of an “exact
psychology as an analogue to physics” (Husserl, 1970: 223), understood as
the naturalization of the subjective and psychic being through an allegedly
a-subjective method. I claim that this entails a petitio principii:
attempting to explain a subjective phenomenon through laws and principles
that depend on the presupposed idealization made from the subjective stance
itself.
Not only IIT but also neurophenomenology (Varela, 1996; Gallagher,
2012) puts subjectivity in first place. However, it does so with some
diverging implications that highlight two radically different approaches to
consciousness. As already explained, IIT attempts to objectify the subject,
reducing it to the objective postulates of the physical. Instead,
neurophenomenology avoids using the subject-object opposition as a
starting point: for example, through the method of epoché, all conceptual,
epistemological, ontological and scientific assumptions are temporarily
suspended in favour of the analysis of lived experience as such. This
does not mean that there is no room for a rigorous study of consciousness.
On the contrary, neurophenomenology is interested in finding epistemic
bridges between first- and third-person perspectives, while, according to
IIT, the first person is used only to provide the theoretical ground of the
theory and is then translated into the third-person analysis of the
physical. In this regard, neurophenomenology develops a continuous
analysis of our phenomenological structure, both before the empirical
experiment – e.g. the front-loaded phenomenology for the design of the
experiment – and during it – e.g. the “explicitation interview” or
“Descriptive Experience Sampling” (Bitbol, Petitmengin, 2017). Instead,
the axioms of phenomenology in IIT are simply assumed and then put
aside once translated into postulates. Moreover, neurophenomenology is
based on an embodied approach to the mind, in line with the most
recent findings in cognitive science, which claims that a comprehensive
study of consciousness must dwell on the relationship between brain,
body and environment. I claim that IIT, on the contrary, focuses its
research only on the integrated information within the brain, accepting
the “brain in a vat” hypothesis, given that the body and the external
environment are important, but not a necessary condition, for a
“solipsistic system” with integrated information and an intrinsic
perspective (Tononi, 2008: 239; Tononi, 2003).
4. Concluding remarks
IIT has the merit of a detailed theoretical structure, with many possible
implications in the empirical field. Its ambitions extend across the
philosophical domain, touching upon some of the most debated issues in
the philosophy of mind. However, as has been shown, there may still be
much to do for IIT to deal correctly with the variety of gaps, issues, and
arguments seen above, given that the theory still seems blocked within the
boundaries of the objectualist interpretation of consciousness. I would
offer just a few suggestions, with the aim of improving the consistency of
the theory.
1) As argued above, it would be less controversial to replace the
strong identity between experience and information with the
asymmetric relation of composition (see § 3.1). Indeed, on the one
hand, this position preserves the ontological relation between
information and consciousness; on the other, it avoids any dualism.
2) At this point, the direct result would be to acknowledge the
unidirectional relation between consciousness and integrated
information: all conscious systems have integrated information,
but not every system with integrated information is necessarily
conscious (see § 3.2).16

16 I would like to thank Marcello Ienca for his reading and comments on the
draft, and Matteo Grasso for our conversations about IIT. All errors,
ambiguities, and misconceptions are my own.

References

Aaronson, Scott (2014). “Why I am not an integrated information theorist
(or, the unconscious expander).” Shtetl-Optimized,
http://www.scottaaronson.com/blog/?p=1799 [Accessed April 2018].
Barrett, Adam (2015). “A comment on Tononi & Koch (2015)
‘Consciousness: here, there and everywhere?’.” Phil. Trans. R. Soc. B,
371 (1687): 20140198.
Bitbol, Michel, Petitmengin, Claire (2017). “Neurophenomenology and the
micro-phenomenological interview.” In Velmans, M., Schneider, S.
(eds.). The Blackwell Companion to Consciousness. Second edition.
Chichester: Wiley & Sons, pp. 726-739.
Casali, A. et al. (2013). “A theoretically based index of consciousness
independent of sensory processing and behavior.” Science Translational
Medicine, 5 (198).
Cerullo, Michael (2015). “The Problem with Phi: A Critique of Integrated
Information Theory.” PLoS Comput Biol, 11 (9).
Chalmers, David (1995). “Facing up to the problem of consciousness.”
Journal of Consciousness Studies, 2 (3): 200-19.
Chalmers, David (2002). “Consciousness and its place in nature.” In
Chalmers, D. (ed.) Philosophy of Mind. Classical and Contemporary
Readings. Oxford/New York: Oxford University Press.
Davidson, Donald (1970). “Mental Events.” In Actions and Events. Oxford:
Clarendon Press.
Dennett, Daniel (1991). Consciousness explained. Boston: Little, Brown
and Co.
Descartes, René (1966). Discours de la méthode. Paris: Garnier-
Flammarion.
Edelman, Gerald, Tononi, Giulio (2001). A Universe of Consciousness. How
Matter Becomes Imagination. New York: Basic books.
Galilei, Galileo (1995). Il saggiatore. Lecce: Conte.
Gallagher, Shaun (2012). Phenomenology. Basingstoke: Palgrave
Macmillan.
Gosseries, O., et al (2009). “Consciousness in the Locked-in Syndrome.” In
Laureys, Steven, Tononi, Giulio (eds.) Neurology of Consciousness:
Cognitive Neuroscience and Neuropathology. Cambridge, MA: Academic
Press, pp. 191-203.
Grasso, Matteo (2013). “Integrated Information Theory and the
Metaphysics of Consciousness,” In 5th Online Consciousness Conference,
February 15-30.
https://www.academia.edu/2508982/Integrated_Information_Theory_a
nd_the_Metaphysics_of_Consciousness [Accessed April 2018].
Helias, M., et al. (2011). “Finite post synaptic potentials cause a fast
neuronal response.” Front Neurosci., 5 (19): 1-16.
Hoel, E. et al. (2016). “Can the macro beat the micro? Integrated
information across spatiotemporal scales.” Neuroscience of
Consciousness, 1: 1-13.
Horgan, Terry (1984). “Jackson on Physical Information and Qualia.”
Philosophical Quarterly, 34: 147-183.
Horgan, John (2015). “Can Integrated Information Theory Explain
Consciousness?.” In Scientific American, 1 December.
http://blogs.scientificamerican.com/cross-check/can-integrated-
information-theory-explain-consciousness/ [Accessed April 2018].
Husserl, Edmund (1970). The crisis of European sciences and
transcendental phenomenology. An introduction to phenomenological
philosophy. Evanston: Northwestern University Press.
Koch, Christof (2012). Consciousness: Confessions of a Romantic
Reductionist. Cambridge, MA: MIT Press.
Koch, C., Tononi, G., reply by Searle, J. (2013). “Can a Photodiode Be
Conscious?.” In The New York Review of Books,
http://www.nybooks.com/articles/2013/03/07/can-photodiode-be-
conscious/ [Accessed April 2018].
Kripke, Saul (1971). “Identity and necessity” In Munitz, Milton (ed.)
Identity and Individuation. New York University Press, pp. 135-164.
Levine, Joseph (1983). “Materialism and qualia: The explanatory gap.”
Pacific Philosophical Quarterly, 64: 354-361.
Levine, Joseph (1999). “Conceivability, Identity, and the Explanatory Gap.”
In Hameroff, Stuart, Kaszniak, Alfred, and Chalmers, David (eds.)
Towards a Science of Consciousness III: The Third Tucson Discussions and
Debates. Cambridge, MA: MIT Press, pp. 3-12.
Massimini, M. et al (2005). “Breakdown of cortical effective connectivity
during sleep.” Science, 309: 2228–32.
Massimini, M., et al (2010), “Cortical reactivity and effective connectivity
during REM sleep in humans” Cognitive Neuroscience, 1 (3): 176–83.
Mindt, Garrett (2017), “The Problem with the 'Information' in Integrated
Information Theory.” Journal of Consciousness Studies, 24 (7–8): 130–54.
Nagel, Thomas (1974). “What is it like to be a bat?.” Philosophical Review,
83: 435-450.
Northoff, Georg (2014). Minding the Brain - A Guide to Philosophy and
Neuroscience. New York: Palgrave Macmillan.
Oizumi, M., Albantakis, L., and Tononi, G. (2014). “From the
phenomenology to the mechanisms of consciousness: Integrated
information theory 3.0.” PLoS Computational Biology, 10 (5).
Onians, Richard (1954). The Origins of European Thought. About the Body,
the Mind, the Soul, the World, Time and Fate. Cambridge: University
Press.
Place, Ullin (1956). “Is consciousness a brain process?.” Br J Psychol., 47
(1): 44-50.
Rosanova, M., et al (2012). “Recovery of cortical effective connectivity and
recovery of consciousness in vegetative patients.” Brain, 135 (4):
1308-1320.
Rowlands, Mark (2001). The Nature of Consciousness. Cambridge:
Cambridge University Press.
Searle, John (1998). “How to study consciousness scientifically”,
Philosophical Transactions of the Royal Society B: Biological Sciences, 353
(1377): 1935–1942.
Searle, John (2013). “Can information theory explain consciousness?.”
New York Review of Books.
http://www.nybooks.com/articles/2013/01/10/can-information-theory-
explain-consciousness/ [Accessed April 2018].
Seth, Anil (2016). “The hard problem of consciousness is a distraction
from the real one.” In Aeon Essays. November, 2,
https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-
distraction-from-the-real-one [Accessed April 2018].
Shannon, Claude (1948). “A Mathematical Theory of Communication.”
Bell System Technical Journal, 27: 379–423 & 623–656.
Tononi, Giulio (2003). Galileo e il fotodiodo. Cervello, complessità e
coscienza. Roma-Bari: Laterza.
Tononi, Giulio (2005). “Consciousness, information integration, and the
brain.” Progress in Brain Research, 150: 109-26.
Tononi, Giulio (2008). “Consciousness as integrated information: a
provisional manifesto.” Biol Bull., 215 (3): 216-42.
Tononi, Giulio (2012). Phi: A voyage from the brain to the soul. New York:
Random House.
Tononi, Giulio (2017a). “The Integrated Information Theory of
Consciousness: An Outline.” In Velmans, Max, & Schneider, Susan (eds.),
The Blackwell companion to consciousness. Malden, MA: Blackwell Pub,
pp. 243-256.
Tononi, Giulio (2017b). “Integrated Information Theory of Consciousness:
Some Ontological Considerations.” In Velmans, Max, & Schneider, Susan
(eds.), The Blackwell companion to consciousness. Malden, MA: Blackwell
Pub, pp. 621-633.
Tononi, G., et al. (2016). “Integrated information theory: From
consciousness to its physical substrate.” Nature Reviews, Neuroscience,
17 (7): 450-461.
Tononi, Giulio, & Koch, Christof (2015). “Consciousness: here, there and
everywhere?.” Phil. Trans. R. Soc. B, 370.
Tononi, Giulio, Koch, Christof (2016). “A reply to Barrett (2016).” Phil.
Trans. R. Soc. B.
Tononi, Giulio, Laureys, Steven (2009). “The Neurology of Consciousness:
An Overview.” In Laureys, S., Tononi, G., Neurology of Consciousness:
Cognitive Neuroscience and Neuropathology. Elsevier, pp. 375-412.
Varela, Francisco (1996). “Neurophenomenology: A methodological
remedy for the hard problem.” Journal of Consciousness Studies, 3 (4):
330-349.
Whitehead, Alfred (2011). Science and the Modern World. Cambridge:
Cambridge University Press.