There is something that it is like to be me


Abstract

A non-mathematical introduction to an inclusive approach to the mathematics of consciousness, based on network theory, dynamical systems, and the reverse engineering of the behaviour of open complex systems and their responses.
Peter Grindrod CBE
Oxford April 2022
Introduction
There is something that it is like to be me. I am assuming that you feel this too, about yourself. In
direct response to external events (incoming stimuli perceived by our sensory organs), or to our
internal voice (our present train of thought), or to both at the same time, we can experience a wide
range of internal sensations, repeatedly and consistently.
We argue that these inner sensations have a hugely practical role. They are not an independent or
a marginal extra. They can precondition both our immediate subconscious and conscious thinking.
They reduce the range of the immediate possibilities that need to be considered or managed, without
our knowing, and thus they allow our minds to zoom in and focus on the most essential matters at
hand. Under those influences, responding within one of a number of internal dynamical modes that
feature in our brains' information-processing operations, we may easily reduce the cognitive load
of our decision-making. So, we each make just the sort of decisions that we always make in various
circumstances, consistently but not necessarily rationally [Ariely]. We do so regardless of any
predefined or objective optimality: that is just the sort of person that each of us is. It would take some hard
conscious effort to recognise ourselves doing so and to pause, back up, and break our usual chain of
response and consequent action [Kahneman].
Inner sensations obviously differ in their scale: they can be large or small. They can be almost
overwhelming, like love, lust, fear, anxiety, ecstasy, pain, or embarrassment; or they can be small
experiences, like the feelings of seeing the blueness of blue, or of the stroking of skin. They are often
called “qualia”, and are accessible phenomenal components of our mental lives. We might posit that
these feelings are hierarchical, with some being components of others. That is not a new idea. Yet, as
we shall see, it accords well with our attempts to reverse engineer simulations of the human brain and
observe its dynamical response to stimuli. By “dynamical response” we mean physical patterns of
neural activity that exist both over time and across the brain. They are not stationary and cannot be
classified in a single snapshot. We can observe these common response modes of dynamical
behaviour in a cortex-like system via multiple experiments where the system is subject to all
kinds of stimuli. The discovery of these distinct internal dynamical modes, which are prime candidates
for internal sensations, requires a kind of in silico reverse engineering.
Materialism is the idea that consciousness is solely dependent upon the information processing within
the brain. This research is novel and has profound implications for materialism. We will go further: we
argue here that conscious phenomena are active components within cognition, and play an existential
role.
Of course, “the something that it is like to be me” is internal to me and is thus private. For the most
part, even with the most invasive high-resolution neuroimaging scanners we cannot see what is going
on inside our brains at a cellular level. That in vivo reverse engineering project is still beyond us.
However, we do know quite a lot about the brain as an information processor and a decision-making
organ. What we know indicates it is usually poor at logic and arithmetic (which are arduous and time
consuming) and instead supports many fallacies in reasoning due to a consistent reliance on heuristics.
In fact, the things that our brains cannot do well, or cannot do at all, show us how distinct the
brains’ workings are from any weak AI application: all present AI is weak, not strong, in the sense of
[Searle], though these days practitioners try hard to avoid the adjectives. Weak AI emulates the
decision-making and recognition powers of the human brain. Moreover, the purloining of vocabulary
(intelligence, neural networks, deep learning) is very unhelpful and is frankly misleading. It is hype.
The resulting machine- and deep-learning methods of opaque supervised and unsupervised
discrimination, interpolation and inference belong to the field of computational statistics, and they are
often impressive, being driven by modern computing power. In the future they will benefit from a
more statistical framework, rather than a “suck it and see” paradigm. The only present
path towards strong AI, now often termed “artificial general intelligence” by the information
scientists, is via whole-brain system simulation, and this would necessarily involve the construction of
information processing platforms that have internal sensations. Such a programme is what we have
in mind here.
Our inner sensations and feelings also radically reduce the data requirement for decisions: we exhibit
“fast thinking” [Kahneman] and we are never really data hungry in the way that decision-making AI
algorithms seeking to emulate human behaviour are. Have we ever even experienced what a lack of
(input) data feels like? Instead, we make a fast early decision, preconditioned by our prior experience
and present mental sensations, and then we refine it in the light of incoming evidence.
We will address a number of key issues regarding our brains as effective and efficient information
processors and the roles that some conscious phenomena play in enabling that performance. Of
course, any improvement in our understanding of our environment (what is going on?) or our
anticipation (what will happen next?), while not incurring any huge costs of cognitive processing, will
be to our evolutionary advantage. We all do these things. We all harness the power of our internal
conscious sensations in becoming effective and more efficient at complex cognitive tasks. Any
prototypical humans that could not do so, or that could only perform at a lower standard, were
relatively at risk, relatively unadaptable, relatively unsuccessful, and may not even have
developed their interpersonal skills or personal relationships well. In short, they did not
propagate their relatively ineffective and inefficient traits as well as others propagated their
superior abilities.
We will take a physicalist approach that is based on what the cells in the human brain, the neurons,
are really doing when they pass signals to one another in a certain manner, with certain dynamics. It
will turn out that the existence of an inner phenomenal life, consisting of the internal dynamical modes
of behaviour in the brain, is a very natural state of affairs. It is a direct consequence of an evolutionary
advantage gained from the brain's particular neuron-to-neuron architecture, the excitable and
refractory dynamics of single neurons, and the nature of time-delayed transmission of sharp spikes of
electrical excitation being passed from neuron to neuron.
We know a lot about these things, and when we ask, “Where does love come from? Where does any
internal sensation come from?”, we can now give a rather clear answer. That answer will be based on
a well-founded materialism; saying that these inner sensations are merely response properties of the
brain as a whole system, and that they confer a straightforward evolutionary advantage; that
consciousness goes hand in hand with the material mental processing within the brain; and that some
aspects of consciousness are so advantageous that their existence will cause the brain to evolve so as to
exhibit them ever-more distinctively. Thus, the physical brain will have evolved to become more
conscious, and perhaps in some very specific ways. Indeed, it is likely to evolve further.
It is sometimes argued (for example, in Chalmers’ excellent introductory book [Chalmers]) that the
ability of the human brain to perform the many required information processing tasks, such as making
decisions, learning, developing and retrieving memories, sustaining language abilities, and so on, is
indeed truly astonishing and impressive; yet these are all still only plausible achievements of such a
complex information processing system, at least in principle, even when we do not fully (if at all)
understand the practical methods of implementation within the information processing pipeline. Yet,
the argument continues, the conscious inner life of sensations that we all experience is truly surprising
and baffling. Our response to that approach is to argue back that these two things are not separable
at all. The very existence of our inner sensations is actually an enabler of our astounding cognitive and
fast thinking abilities; their existence decreases the cognitive load and therefore is rewarded by the
evolutionary advantage that they help create for such smarter beings.
Continuing further with this, we should properly think of our inner sensations as the manifestation of
a preconditioning mechanism, without which our thinking and responses to any external stimuli would
be far more ponderous and indeed far more data hungry, in considering far too wide a solution space
for almost any problem at hand. Thus, our internal sensations are not a surprising or optional extra,
or a bonus, they are an integral cog in the engine of our efficient and fast information processing,
recognition, and decision-making.
Put in this way, it seems clear that we should ask not why any conscious experiences exist, but more
specifically, we should seek evidence that validates their role as a preconditioner for a brain which is
seeking to answer very immediate “live” questions, such as “What is going on?” or “What might
happen next?”. Without such a preconditioning mechanism, the neural cognitive system would
restart almost every moment from the same massive array of possibilities. The lack of any such
conscious experiences is thus not a good option and indeed the brain must have evolved (physically
and functionally) so as to generate and support a sensational inner life as a key part of its astonishing
cognitive information processing abilities. Consciousness is thus the key to efficient cognition. The
latter is impossible without the former; and the brain has evolved to enable both together.
Once we adopt this viewpoint, we gain answers to many basic philosophical questions, posed
originally while perhaps assuming that consciousness plays no essential role within cognition. These
include, “Why do we have conscious experience that is full of these inner sensations, supporting our
perceptions and thus our expectations about the world?”, and, “Must every brain be able to support
such sensations?”. One simply cannot ask what the added value is, what is the added purpose of the
brain’s conscious experiences. Instead, we see clearly now that they are existential to the whole
effectiveness of the cognitive performance of the brain. There is no world where one has the latter
effectiveness without the former conscious experiences.
In one important respect it is perhaps very misleading to talk of consciousness as if it were common
to all and is a single, well-defined, aspect of experience. That is, we should say that consciousness is
not likely to be a binary property, where you are conscious (with inner conscious experiences) or not
conscious. But rather that any conscious ability of an organism is a journey, both in the longer-term,
on evolutionary time scales, and in the shorter-term, within a single lifetime.
Of course, on evolutionary time scales this raises another question about whether novel aspects of
consciousness might be developed, if it confers certain advantages. Humans might develop more
heightened sensations. Perhaps our own 21st-century experiences are far different from those of
hunter-gatherers, dominant from 2 million until about 10,000 years ago: we are now exposed to far more
information and stimuli, via modern communications and digital media platforms, our brains can
rarely rest until we are sleeping, and even then... How is the 24/7 fog horn of the data deluge of
modern digital life changing our mental health and thus conferring new advantages to those with
certain specific and enhanced developed conscious experiences?
Within a single lifetime it is very reasonable to think that our own individual consciousness ebbs and
flows with us, and the development of conscious abilities is far from a one-way street. As a
consequence of our particular life experiences, our needs and our regular practices, we might develop
aspects of our own conscious experience, of our brain's own “inner life”. Or with a lack of need or
usefulness that inner life may decline and we may reduce our own conscious experiences. Perhaps
our inner sensations gain or lose their resolution, and become more or less nuanced.
Consciousness is sometimes presented as a huge problem for materialism: “How is it possible for a
bunch of cells that are each devoid of consciousness to get together and cause consciousness?”. This
argument misses the most fundamental point of systems behaviour. Reductionism cannot work and
is not required to be at work here. There is no need for systems components (cells) to be conscious as
individuals. Instead, the conscious phenomena, the inner sensations at their simplest, are formed by
the dynamical behaviour of the whole system. Demanding reductionism, and complaining about its
absence, is a red herring.
A further response to the problem of consciousness is to run away from it altogether and say that
consciousness is an illusion. But our conscious experiences seem valid and repeatable to each of us,
and we should investigate their possible causes and consequences rather than denying their reality.
Remember that light, sound, gravity and electrostatics were all highly mysterious before their
explanations in the 16th to 18th centuries, including the subsequent development of potential theory
(and Poisson’s equation) explaining some hidden mechanisms: hidden that is, from our eyes and our
direct daily experience. The energy-mass equivalence within special relativity was opaque prior to
its mathematical establishment by Einstein in 1905, and its subsequent validation (for example, in
1932 by Cockcroft and Walton). Mystery is an essential challenge to science.
A popular current approach to consciousness is based on the scientific phenomenon of emergence.
This concept asserts that consciousness is an emergent property of the complex structure of the
brain’s matter and its dynamics.
Emergence has been a very popular area for systems research for the past forty years or so within
material science, condensed matter, social science, and many other applications. It usually pertains to
so called “complex systems”. At their simplest, complex systems are a collection of individual units,
with each unit being a dynamical system which governs its own internal state (no hard need for any
stochasticity). All of the units have similar properties, and all of their state dynamics are coupled
together in a network of unit-to-unit interactions, so that they may interfere with each other. The
density and strength of these interactions is usually treated as a macroscopic system parameter which
may be varied continuously (at least in principle), as environmental or constitutive conditions change,
between zero (meaning that there are no unit-to-unit interactions present, with all units being isolated
from one another), up to 1 (meaning that all possible pairwise unit-to-unit interactions are present).
The central idea of emergence is that the properties of the system can change at critical (bifurcation)
points of the underlying system (as some system parameters vary) and that the resulting collaborative
behaviours cannot be predicted by an examination of the component units and the system couplings.
These switches are often termed “phase changes”, since fundamental macroscopic properties of the
system switch abruptly as underlying system parameters vary slowly and continuously. For example,
those parameters might control the average connectivity and strength of the interactions between
the basic units. Emergence usually results in the system exhibiting a lack of homogeneity, which is
often termed “pattern formation”. For example, this occurs within applications to developmental
biology, where a growing blob of replicating cells must differentiate in order to develop the whole as
an organism with highly distinguished inner structures.
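As a concrete, deliberately generic illustration of such a phase change, consider the classic giant-component transition in a random graph: as the connection density p is varied continuously, the fraction of units joined into one large cluster jumps abruptly near a critical value. This sketch is only a minimal stand-in for the emergence idea, not the brain model discussed later; the sizes and probabilities are illustrative.

```python
import random

def giant_component_fraction(n, p, seed=0):
    """Fraction of n units lying in the largest connected cluster of an
    Erdos-Renyi random graph where each possible edge is present with
    probability p.  Uses union-find to merge clusters as edges appear."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n
```

Sweeping p upward through the critical density (around 1/n), the largest cluster jumps from a vanishing fraction to nearly the whole system: an abrupt, collective change that no inspection of any single unit would predict.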
Let us be clear that although the phenomenon of emergence shares some properties with the
“distinct internal dynamical modes” concept, introduced above, which we shall elucidate further, it
also pulls in some rather less fortunate baggage: the system's dynamical stability and its response to
the presence of small, noisy, perturbations in producing heterogeneous patterns via symmetry
breaking and so on, as the homogeneous pattern becomes unstable.
We will argue against these aspects of emergence processes underpinning consciousness via the
following simple thought experiment. We can bring our previous familiar sensations to mind. If you
imagine a collage of images, music, past moments and events, you can feel embarrassed or
melancholy or anxious. You feel the sensation just as if those imaginings were actual incoming sensory
signals. One does not have to wait until there is a dynamical instability, hanging around while a phase
change emerges, or wait for the right type of stochastic noise to perturb one's previous mood.
The central issue for us about emergence is that it usually refers to phase changes in the observed
heterogeneous state of closed systems, where the dynamics involved are all internal to the
system. Such a complex system is thus behaving within an insulated bubble (some scientists refer to
it as being wrapped in a Markov blanket [Friston]). The brain however is demonstrably an open system.
It has inputs and outputs. It is constantly being forced by various incoming stimuli: it constantly
receives sensory information from the external world and from elsewhere within the body. Left alone
and unstimulated it may free-run and approach a stationary internally-driven pattern, whence
emergence considerations may be useful, but this is never the normal situation. Our focus is on how
the brain makes sense of these inputs; how it responds to all sorts of distinct sensory inputs. It will
turn out that this dynamical, real-time response to stimulation takes the form of one of a number
of distinct internal dynamical modes becoming prevalent, each mode being a
distinctive pattern of activity that exists both over time and across the brain. Once any of these modes
sets in, the immediately incoming stimuli can only result in a response (as a decision or an action) that
is preconditioned by that present mode.
Perhaps many commentators just treat “emergence” as a convenient carpet under which to brush
things. But our dynamical systems point of view offers these other, much better, alternatives.
The human brain as a complex system: what lies within?
How might we build, and then reverse engineer, an information processing system that contains all of
the features and all of the abilities of the human cortex?
This aspiration would include the ability for “fast thinking” information processing (cognition); a
capacity to make rapid decisions based on little or incomplete information, possibly biased
(preconditioned) by its present modes of behaviour; and some internal dynamical properties
supporting such features as subjective qualia, feelings, and phenomenal sensations, which are all part
of the hard problem of consciousness. As physicalists, we believe that the latter are simply artefacts
of the whole system when it is stimulated; and that they are mere consequences of an evolutionary
development that has enabled both efficient and effective information processing. They are
corollaries of the basic system components: (i) the dynamics of excitable and refractory neurons; (ii)
the architecture, consisting of a loose network-of-(denser)-networks; (iii) the time-delays inherent
in transmitting signals across that architecture; and (iv) the stimulation from sensory and internal-body
sources from outside of the brain.
The conscious part of the human brain is the cortex: a layered entity that is crinkled up and
wrapped over the more fundamental limbic (reptilian) part of the brain, which itself unconsciously
controls and automates various motor bodily functions (such as operating properties like heartrate,
temperature control, and so on) often via chemical mechanisms. The human cortex contains around
10B individual neurons. They are arranged into approximately 1M neural columns, which
mathematical network theory scientists call modules, with each module containing 10,000 rather
densely connected neurons. The modules are arranged in a two-dimensional grid over the surface of
the cortex. If the human cortex were ironed out flat, it would be like a carpet with the modules forming
the carpet pile. Some near-neighbouring modules are directly connected, by which we mean that at
least one neuron on one module is directly connected to at least one neuron within the other module.
All neuron-to-neuron transmission of firing spikes (signals) incurs a variable (real valued) time delay.
The mathematical study of networks, which has developed phenomenally over the past two decades,
has been driven and enabled by a number of applications, each of which has been subject to a data
deluge (bioinformatics, social networking, and so on). We consider a set of vertices (nodes) that are
representative of individual units which often have a dynamical state, and edges (connections,
directed or two-way) that represent the unit-to-unit interactions. Networks in vivo are rarely uniform
and a huge amount of attention goes into showing how observed networks, possibly very large ones,
are really a loosely connected set of more densely connected sub-networks. The dense subgraphs are
usually called modules. So, the whole is a network of networks: an outer network that is relatively
sparse and connects up the modules, the inner dense networks.
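A minimal sketch of this network-of-networks idea, with illustrative (not physiological) module counts, sizes and connection probabilities:

```python
import random

def network_of_networks(n_modules, module_size, p_in, p_out, seed=0):
    """Sample a directed network-of-networks: neurons are numbered so
    that neuron u belongs to module u // module_size; a directed edge
    u -> v is present with probability p_in inside a module and with
    the much smaller probability p_out between modules."""
    rng = random.Random(seed)
    n = n_modules * module_size
    edges = set()
    for u in range(n):
        for v in range(n):
            if u == v:
                continue
            same_module = (u // module_size) == (v // module_size)
            if rng.random() < (p_in if same_module else p_out):
                edges.add((u, v))
    return edges
```

With, say, 5 modules of 20 neurons, p_in = 0.2 and p_out = 0.005, the result is a sparse outer network connecting dense inner networks, in miniature.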
To summarise, the cortex is a network-of-networks of neurons (units). Each module contains 10,000
or so dynamical units (individual neurons). The neuron-to-neuron connections are directed in the
sense that a spike of electrical depolarisation leaves the soma (the cell body) of one neuron moving
out along its axon, reaches multiple endpoints, jumps (or not) across the corresponding synapses onto
the dendrites of adjacent neurons, and then travels up those dendrites into those adjacent neurons'
somas (central cell bodies). The modules are more sparsely linked together by directed connections
from a neuron in one module to a neuron within another adjacent module.
In recent research very large scale (VLS) neuron-to-neuron simulations were completed. This requires
the use of a bespoke multi-core computational facility (in our case SpiNNaker [Furber, 2014]) in order
to deal with the very large network of neurons embedded within the whole multi-module network-of-networks architecture.
The VLS neuron-to-neuron simulations revealed that in isolation each of the individual modules
actually behaves like a k-dimensional clock [Grindrod and Lee, 2017]. A k-dimensional clock is a
dynamical system with k separate phases, with each phase variable normalised to be identified
modulo 2π and winding around, and thus increasing steadily at its own rate when left unperturbed. It
thus has k degrees of freedom. It is a k-dimensional dynamical system, and simulations show that k (>>1)
is proportional to the logarithm of the number of neurons within a single module/neural column
[Grindrod and Lee]. One can monitor the output from an isolated module within a VLS simulation and
use a state-space embedding method (a form of signal processing) or a similar method to estimate k
for a wide variety of module neuron population sizes, while maintaining a typical degree distribution
for intra-module neuron-to-neuron connections.
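The winding-and-resetting behaviour of a single module, viewed as a k-dimensional clock, can be sketched as follows; the rates and reset values are placeholders, not quantities estimated from the simulations:

```python
import math

class KClock:
    """A k-dimensional clock: k phase variables, each identified modulo
    2*pi, winding steadily at its own natural rate when unperturbed."""

    def __init__(self, rates):
        self.rates = list(rates)           # natural winding rates
        self.phases = [0.0] * len(rates)   # k phases, each in [0, 2*pi)

    def wind(self, dt):
        """Advance every phase by its own rate over a time step dt."""
        self.phases = [(p + w * dt) % (2 * math.pi)
                       for p, w in zip(self.phases, self.rates)]

    def reset(self, new_phases):
        """A phase-resetting map: instantaneously overwrite all k phases."""
        self.phases = [p % (2 * math.pi) for p in new_phases]
```

Left alone, each phase simply winds at its own rate; a reset models the shock delivered by an arriving spike, as described below.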
This logarithm-scaling is itself highly desirable as the human cortex has only a limited total volume and
total energy: our heads cannot grow any larger. It also explains why modules are observed to be so
uniform in size. One simply never meets people who have brains where there is a wide range of
module (neural column) sizes, as measured by the module’s population of densely connected neurons.
Rather than double the size of a single module, the logarithmic growth of the degrees of freedom (k)
implies that one would be better to have created two separate modules of the same regular size.
There is indeed an optimal value for the number of neurons per module; and that is exactly what
evolution appears to have discovered. Hence the network-of-networks architecture is
produced so as to maximise the total number of degrees of freedom available to the whole system. It
is important to note that this conclusion follows from a pencil and paper conceptual analysis rather
than being inferred from observations.
Notice that we are not saying that any individual neurons behave like clocks or that they are oscillatory
in any particular way. We are saying that a module (a neural column) made up of a set of 10,000
densely connected neurons behaves in total as a k-dimensional clock, most likely reflecting multiple
and non-intersecting cycles within its connections.
Of course, the individual modules are not ever operating in isolation, they talk to each other through
the outer network (across the network-of-networks); with the whole being a massive, complex, and
open system.
Let us now begin to imagine how this whole complex system might be assembled.
One very useful way to imagine (mathematicians say “to model”) the outer network of modules is via
a range dependent directed graph, which is defined as follows. First, we lay out the modules, each of
which is a k-dimensional clock, on a two-dimensional grid; just like the cortex being ironed out flat.
Each node/module in this grid is itself a clock with k phases and k natural winding rates. Perhaps no
two modules are exactly the same (in terms of their intrinsic winding rates which are real positive
numbers). Next we connect up the module/clocks by a range-dependent graph that inserts a directed
connection from the module/clock at position (i,j) to the module/clock at position (i',j') with a
probability that is a decreasing function of the range, defined to be the distance √((i−i′)² + (j−j′)²).
This last is the Euclidean distance, but we could equally use the Manhattan distance, |i−i′| + |j−j′|. The
point is that pairs of module/clocks that are further and further separated (at longer and longer
ranges) are less and less likely to be directly connected. That is why this type of network model is
called range-dependent. Each of these connections is one-way of course, and they are emplaced
independently of one another; but there is nothing to stop a pair of module/clocks being connected
both ways. Range-dependent graphs were introduced twenty years or so ago with other applications
in mind [Grindrod, 2002]. They enjoy a very useful small world property [Watts and Strogatz, 1998]:
with a high degree of local clustering (triangularisation: if A is directly connected to B, and B is directly
connected to C, then A is likely to be directly connected to C) between near neighbours, while the
whole outer network has a relatively small walk-diameter: any selected pair of module/clocks can be
connected up by walk along a relatively small number of directed edges, one after the other. That last
property is merely the consequence of having at least some rare long-range connections: it is at the
heart of the well-known “six degrees of separation” idea.
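A range-dependent outer network of the kind described can be sampled as below; the particular decay law (edge probability = decay raised to the distance) is one convenient choice among the decreasing functions the text allows:

```python
import math
import random

def range_dependent_graph(grid, decay, seed=0):
    """Sample a directed range-dependent graph on a grid x grid array of
    module sites: an edge (i, j) -> (i2, j2) is inserted independently
    with probability decay ** d, where d is the Euclidean distance
    between the two sites and 0 < decay < 1."""
    rng = random.Random(seed)
    sites = [(i, j) for i in range(grid) for j in range(grid)]
    edges = []
    for a in sites:
        for b in sites:
            if a == b:
                continue
            d = math.hypot(a[0] - b[0], a[1] - b[1])
            if rng.random() < decay ** d:
                edges.append((a, b))
    return edges
```

Near neighbours are connected often, while long-range edges are rare but present: exactly the combination that yields high local clustering together with a small walk-diameter.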
Each directed connection will carry signals from a source module/clock to a receiving module/clock.
Such signals are in the form of spikes of electrical depolarisation that leave the centre of some neuron
within the source module/clock, travel outwards along its axon, jump across a synapse, and then travel
inwards along a dendrite of a corresponding neuron belonging to the receiving module/clock until it
arrives at the cell centre. This transmission takes some time, so there is a real valued time-delay for
the signal to leave the cell centre of the neuron in the source module/clock and to arrive at the cell
centre of the neuron in the receiving module/clock.
Finally, we need to imagine some rules for the initiation of a spike to be sent by each source
module/clock out along a directed connection, and for the dynamical consequence of a spike arriving
at the receiving module/clock. These are both properties of an individual directed connection. The
simplest condition would be to assume that a spike-signal is instigated along an edge whenever the k-
phases of the source module/clock satisfy a certain equation. For example, when some linear
combination of the k-phases is congruent to a given pre-set constant modulo 2π. When the signal
travelling along that edge arrives at the receiving module/clock it administers a kind of instantaneous
shock to the whole module/clock. The simplest rule to allow for this is as if a very strong cardiac
defibrillator were being applied, hitting and resetting all k phase dimensions. In that case, as soon as
a signal-spike arrives along a specific edge, the k-phases of the receiving module/clock must be
instantaneously reset to some given pre-set phase values.
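The two edge rules just described, the firing condition and the simplest “defibrillator” PRM, can be written down directly; the tolerance used to test congruence in discrete time is an implementation assumption, as are the particular coefficients and constants:

```python
import math

TWO_PI = 2 * math.pi

def fires(phases, coeffs, threshold, tol=0.05):
    """Firing condition for one outgoing edge: a spike is instigated
    when a fixed linear combination of the source clock's k phases is
    congruent (within tol, to cope with discrete time) to a pre-set
    constant, modulo 2*pi."""
    s = sum(c * p for c, p in zip(coeffs, phases)) % TWO_PI
    gap = abs(s - threshold)
    return min(gap, TWO_PI - gap) < tol

def defibrillator_prm(preset):
    """The simplest PRM: whatever the receiving clock's phases were,
    they are instantaneously reset to the given pre-set values."""
    return [p % TWO_PI for p in preset]
```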
In the literature such a shock to a clock is called a phase resetting map (PRM) [Glass and Mackey,
2020]. PRMs can of course be more sophisticated than the above rather “heavy-handed”, and
simplistic, suggestion, with the k-phase post-reset being more nuanced and dependent upon the
immediate pre-reset k-phases [Grindrod and Patel]. But the above, most simple, PRM is suitable for
our purposes. We do not need to be experts in high dimensional PRMs at this stage; and in fact, rather
little is actually known about PRMs being applied to k-dimensional clocks (when k>>1). The situation
is certainly much more sophisticated than the situation where PRMs are applied to one-dimensional
(k=1, normal) clocks, for which there is over forty years of research available, and essentially there are
only two possible topological types of PRM [Glass and Mackey].
The result of this imaginary effort, this thought experiment, is a system containing a large number of
k-dimensional clocks, each winding at their own rates, and occasionally sending out shocks along
various directed edges within a range dependent outer network. As these signals arrive, they apply a
specified PRM to the phases of the receiving clock. Thus, all of the clocks are constantly being phase
reset by one another, while just winding on in between times. This whole is an information processing
platform. Recall that each module/clock corresponds physiologically to a neural column representing
10,000 neurons, so in all there will be about 1M module clocks within the range dependent outer
network.
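Putting the pieces together, a toy version of the whole system, far smaller than 1M module/clocks, and with illustrative rates, edge density, time-delay and reset rule rather than the paper's calibrated choices, might look like this:

```python
import math
import random

TWO_PI = 2 * math.pi

def simulate(n_clocks=20, k=3, steps=800, dt=0.01, seed=0):
    """Toy network of k-dimensional clocks with time-delayed phase resets.
    Each clock winds at its own rates; when its first phase wraps past
    2*pi it instigates a spike along each outgoing edge; after a fixed
    delay the arriving spike applies the simplest PRM, resetting the
    receiving clock's phases to zero.  Returns the number of resets."""
    rng = random.Random(seed)
    rates = [[rng.uniform(0.5, 2.0) for _ in range(k)]
             for _ in range(n_clocks)]
    phases = [[0.0] * k for _ in range(n_clocks)]
    # sparse random directed outer network (stand-in for range dependence)
    edges = [(i, j) for i in range(n_clocks) for j in range(n_clocks)
             if i != j and rng.random() < 0.15]
    delay_steps = 5                 # transmission time-delay, in steps
    in_flight = []                  # (arrival step, receiving clock)
    resets = 0
    for t in range(steps):
        # deliver spikes whose delay has elapsed: apply the simple PRM
        arrived = [s for s in in_flight if s[0] == t]
        in_flight = [s for s in in_flight if s[0] > t]
        for _, j in arrived:
            phases[j] = [0.0] * k
            resets += 1
        # wind every clock; a wrap of phase 0 instigates outgoing spikes
        for i in range(n_clocks):
            old = phases[i][0]
            phases[i] = [(p + w * dt) % TWO_PI
                         for p, w in zip(phases[i], rates[i])]
            if phases[i][0] < old:  # wrapped past 2*pi
                for a, b in edges:
                    if a == i:
                        in_flight.append((t + delay_steps, b))
    return resets
```

Even at this scale the essential feedback is present: all of the clocks are constantly being phase-reset by one another while winding on in between times, and forcing the system is a matter of applying extra resets to a chosen input clock.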
In fact, as you will have guessed, this whole construction is not just an imaginative concept, and it is
relatively straightforward to code up this complex system and let it run, at least for
10,000 to 100,000 clocks [Grindrod and Brennan, 2022]. Yet to simulate 1M clocks would still require
a bigger beast of a computer. Even so, that computational task is really trivial compared with VLS
neuron-to-neuron full simulations, which are both rather expensive and rare [Grindrod and Lester,
2021] and require very specialised multicore computing facilities [Furber, 2014]. However, let us
stay up at the conceptual level for now.
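To see just how straightforward such a coding-up is, here is a minimal toy sketch of the whole system in Python. It is emphatically not the simulation code of [Grindrod and Brennan, 2022]: the network size, the edge-probability decay, the delay distribution, the firing rule, and the reset-to-zero PRM are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 200, 3                     # toy scale: 200 clocks, k phases per clock
phases = rng.random((N, k))       # each phase lives in [0, 1)
rates = 1.0 + rng.random((N, k))  # each clock winds at its own rates

# Range-dependent outer network: a directed edge i -> j appears with a
# probability that decays with the separation |i - j|; each edge carries
# an integer transmission delay. Both distributions are arbitrary choices.
out_edges = {i: [j for j in range(N)
                 if j != i and rng.random() < 0.5 ** abs(i - j)]
             for i in range(N)}
delay = {(i, j): int(rng.integers(1, 6))
         for i in out_edges for j in out_edges[i]}

in_flight = []                    # (arrival_step, target) shocks in transit
dt = 0.05
for step in range(2000):
    old = phases.copy()
    phases = (phases + rates * dt) % 1.0           # all clocks wind on
    fired = np.where(phases[:, 0] < old[:, 0])[0]  # first phase wrapped
    for i in fired:               # shocks travel along delayed out-edges
        in_flight += [(step + delay[(i, j)], j) for j in out_edges[i]]
    arriving = {j for (s, j) in in_flight if s == step}
    in_flight = [(s, j) for (s, j) in in_flight if s > step]
    for j in arriving:            # the simplest PRM: reset all k phases
        phases[j] = 0.0
```

Even at this toy scale the essential ingredients are all present: winding, range-dependent directed edges, time-lagged transmission, and mutual phase resetting.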
Given this complex system, we can force it, just as the cortex is forced by recurrent stimuli input
from elsewhere: from sensory organs responding to external experiences, or from other parts of the body.
This is easy to include. We select an input clock and send in phase resets of a certain type whenever
we wish. Of course, the consequences of forcing a single clock within the grid are soon propagated
out across the entire range-dependent network of clocks, a consequence of the relatively
small walk-diameter of the small-world network [Watts and Strogatz, 1998].
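In a toy setting, such forcing amounts to no more than applying the PRM to a chosen input clock at prescribed times; the reset then propagates outwards through the ordinary shock mechanism. The function below, including its name and parameters, is our own illustrative sketch:

```python
import numpy as np

def force_input(phases, input_clock, step, stimulus_steps, reset_phase=0.0):
    """Externally drive the system: whenever a stimulus is due at this step,
    apply the PRM to the chosen input clock. Downstream, the reset propagates
    across the whole range-dependent network via the usual shocks."""
    if step in stimulus_steps:
        phases[input_clock] = reset_phase
    return phases

phases = np.random.default_rng(2).random((100, 3))
phases = force_input(phases, input_clock=0, step=50,
                     stimulus_steps={0, 50, 100})
```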
The VLS neuron-to-neuron simulations have also revealed the inner dynamics of the whole system in
response to various external patterns of stimulation [Grindrod and Lester, 2021]. The system responds
consistently in each case by exhibiting one of a number of distinct dynamical modes. These are internal
patterns of firing behaviour that exist over time and across parts of the cortex. They are arranged
hierarchically, and once in train they bias the immediately following information processing.
Exactly the same situation will be true for our range-dependent network of k-dimensional clocks.
Given a very wide range of stimuli, the system responds by settling into one of a discrete
number of dynamical modes. The modes are characteristic patterns of activity that exist across the
system and over time. These dynamical regimes precondition the system in its response to any further
stimuli, a bit like setting its expectations, and hence they reduce the immediate range of possible
behaviours (inferences, decisions and consequences). They are internal, subjective, dynamical modes,
which can also be arranged hierarchically, and their existence is a non-negotiable consequence of the
network-of-networks architecture, the k-dimensional clock dynamics, the range-dependent and time-
delayed transmissions, and the PRMs. All such systems will exhibit these modes. In particular, all such
cortex-like systems will do so, including any cortex of a human brain. It is the experience of being
locked into these distinct dynamical regimes that provides the sensational, phenomenological inner
life.
These modes are primary candidates for qualia. Yet they cannot be known without the model and the
analysis of vast numbers of simulations, with the resulting dynamical systems subjected to many distinct
types of forcing, and the resultant patterns then contrasted, modulo suitable time shifts. This reverse
engineering is possible only with simulations in silico. Meanwhile, to any non-mathematician they
may still appear to be very mysterious.
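One simple, hypothetical way to contrast resultant patterns modulo time shifts is to score each pair of recorded responses by their best alignment over all circular shifts, and then cluster on those scores. The measure below is an illustrative choice of ours, not the actual analysis used in the reverse engineering programme:

```python
import numpy as np

def dissimilarity(x, y):
    """Contrast two recorded response patterns modulo time shifts: take the
    minimum mean-square difference over all circular time shifts of y."""
    return min(float(np.mean((x - np.roll(y, s, axis=0)) ** 2))
               for s in range(len(y)))

rng = np.random.default_rng(3)
a = rng.random((50, 4))      # one response: 50 time steps, 4 recorded signals
b = np.roll(a, 7, axis=0)    # the same pattern, merely time-shifted
# dissimilarity(a, b) is (numerically) zero: the two count as the same mode.
```

Responses falling into a small number of tight clusters under such a measure would be the system's distinct dynamical modes.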
In taking a materialist stance one might think we are simply aligning with Dennett [Dennett, 1991].
However, Dennett seems to argue that conscious phenomena are a consequence, or by-product, of
the effort of the computational/cognitive machinery; a product of the multiple, layered computations
running on the hardware of the brain. Yet we are saying that such sensations have a key role to play
in that fast-thinking cognitive pipeline. They are not consequences but active and essential
participants, as preconditioners, or we might say as a self-created and fluid set of priors, in a Bayesian
sense. This is attractive to mathematical scientists (we admit), computational scientists, and AI
engineers, because they all know how data-hungry and computationally demanding even the
simplest live discrimination tasks are (here “live” means in real time). Of course, for many working within
AI emulation the computational costs are of little concern, and indeed they are no longer a constraint.
But for a human brain this resourcing is capped. The only way around this constraint is to precondition
the cognitive engine so that the brain is half expecting, or anticipating, the answers to questions such
as “What is happening?” and “What will happen next?”, by greatly reducing the plausible decision set.
In doing so the system is always anticipating what happens next before finely resolving it. The
“anticipation/expectation” idea (to reduce cognitive load) is not new: it was anticipated by Friston
[Friston, 2012] and has since been developed, as part of the free-energy principle, into a kind
of forward expectation mechanism, where the cognitive brain creates those forward expectations that
are most likely to become fulfilled.
Prior to making and reverse engineering these VLS simulations, we had foreseen the characterisation
of our inner sensations as latent variables of the cortex’s cognitive dynamical system [Grindrod, 2018].
That idea connected up a number of diverse lines of research and intellectual approaches (some of
which we have elucidated here). It encouraged us to go further, and now, after our simulations and
reverse engineering programme, we are positing an even more active role for the qualia
(corresponding to transient inner dynamical modes) in preconditioning the immediate information
processing tasks. Of course, the concept of a network of k-dimensional clocks, all phase resetting one
another, offers us a way out of expensive VLS simulations, and plainly suggests how we might develop
an efficient, non-binary, information processor which itself possesses internal dynamical modes
(sensations). Would that be a minimally conscious computer?
References
[Ariely, 2010] Ariely, Dan, (2010), Predictably Irrational: The Hidden Forces That Shape Our Decisions.
New York: Harper Perennial.
[Chalmers, 1996] Chalmers, David J. (1996), The Conscious Mind: In Search of a Fundamental Theory,
Oxford University Press.
[Dennett, 1991] Dennett, Daniel (1991), Consciousness Explained, Allen Lane, The Penguin Press.
[Friston, 2013] Friston, Karl (2013), Life as we know it. J. R. Soc. Interface 10, 20130475.
[Friston, 2012] Friston, Karl (2012), The history of the future of the Bayesian brain, NeuroImage,
62 (2): 1230–1233. doi:10.1016/j.neuroimage.
[Furber, 2014] Furber, S.B., Galluppi, F., Temple, S. and Plana, L. A. (2014), The SpiNNaker Project,
Proceedings of the IEEE, 102 (5): 652–665.
[Glass and Mackey, 2020] Glass, Leon and Mackey, Michael C. (2020), From Clocks to Chaos: The
Rhythms of Life, Princeton: Princeton University Press, 2020.
[Grindrod, 2018] Peter Grindrod; On human consciousness: A mathematical perspective. Network
Neuroscience 2018; 2 (1): 23–40. doi: https://doi.org/10.1162/NETN_a_00030
[Grindrod 2002] Grindrod, Peter (2002), Range-dependent random graphs and their application to
modeling large small-world Proteome datasets, Phys. Rev. E 66.
[Grindrod and Brennan, 2022] Grindrod, Peter, Brennan, Martin (2022), Cortex-Like Systems via
Range-Dependent Networks of Phase-Resetting k-Dimensional Clocks, Preprint ResearchGate,
https://www.researchgate.net/publication/359202653_Cortex-Like_Systems_via_Range-
Dependent_Networks_of_Phase-Resetting_k-Dimensional_Clocks
[Grindrod and Lee, 2017] Grindrod, Peter, and Lee, Tamsin E. (2017), On strongly connected networks
with excitable-refractory dynamics and delayed coupling, Roy. Soc. Open Sci. 2017 Apr; 4(4): 160912
doi: 10.1098/rsos.160912.
[Grindrod and Lester, 2021] Grindrod, Peter, and Lester, Chris (2021), Cortex-like complex systems:
what occurs within?, Frontiers in Applied Mathematics and Statistics, 7, p 51.
[Grindrod and Patel, 2015] Grindrod, Peter and Patel, Ebrahim L. (2015), Phase locking to the n-torus,
IMA Journal of Applied Mathematics 10/2015.
[Kahneman, 2011] Kahneman, Daniel (2011), Thinking, Fast and Slow, Farrar, Straus and Giroux.
[Searle, 1980] Searle, John (1980), "Minds, Brains and Programs" Behavioral and Brain Sciences, 3 (3):
417–457.
[Watts and Strogatz, 1998] Watts, Duncan J., and Strogatz, Steven H. (1998), Collective dynamics of
'small-world' networks, Nature, 393 (6684): 440–442.