
Abstract

In our age of digitalization, humans feel more and more dominated by machines. With brain-machine interfaces (BMIs), technology now seems on its way to invade the inner territory of human privacy: our imagination, fantasy, hidden thoughts, and feelings. On the other hand, machines are man-made in all their aspects, an embodiment of ourselves. This essay provides a critical assessment of the possibilities and limits of current BMI technology, and sheds light on the concepts underlying it. It reveals an intimate relationship between the concepts of man and the concepts of machine that can be shaped to escape technological determinism.
On the Intimate Relationship
between Man and Machine
MATTHIAS DELIANO (GERMANY)
The glorification of the cyborg as a new pleasure-realm of man fails to recognize, however, what patience, compliance, and even willingness to endure pain the use of technical means in therapy and rehabilitation already demands today.
Detlef B. Linke
As tools, machines are functional extensions of our body, augmenting and expanding our interaction with the world. Beyond that, Western culture has developed a
more intimate, metaphorical relationship
between man and machine over the centuries. This development started in the anatomical theaters of the Renaissance in the 16th century, when the human body was detached from the person and turned into an object on the dissecting table (Kathan, 2003). Devoid of empathic relationships and personal interests, the body became physically manipulable; it could be separated into parts and ascribed dedicated, non-personal functions. This made it possible to view the body as a machine and, vice versa, to employ the mechanistic body as a blueprint for the development of new machines.
With the transfer of the body from a per-
sonal domain into the realm of technology,
technical innovations not only refine and create new interventions into the body, which marks the success story of modern Western medicine. Beyond that, technological innovation has since been progressively and radically transforming the way we conceive of and ultimately experience our body.
As part of the body, the brain, too, has been steadily re-conceptualized as a machine in the light of current technology (Kathan, 2003). With
mechanical engineering being the dominant technology of the 17th cen-
tury, the brain at that time was conceived as a hydraulic/pneumatic ma-
chine. With the rise of electromagnetism and the demonstration that the
brain is electrically excitable, it became an electrical organ. Later on, the
network structure of the brain revealed by 19th-century neuroanatomists provided an analogy to telegraph and telephone networks, and thus a strong link to communication technology, which led to a mutually stimulating and fruitful parallel development of brain and computer science.
Thus, John von Neumann’s theories of computation, which are the basis of
modern computers, were strongly inspired by brain science (von Neumann,
1958). In turn, computational theory reentered brain science with the cognitive turn in the 1970s, and has had a prevailing influence there ever since.
The strength of computational theories lies in the fact that they provide
mechanisms of algorithmic problem solving that can be abstracted from
their physical implementation. This makes it possible to describe mental processes in terms of computational functions realized by physical brain machinery, and thereby to alleviate the long-standing mind-body problem
(Rorty, 1979). Serving dedicated computational functions, the cognitive
performance of the brain/mind can then be quantified by the amount,
speed, and precision of information processing. Even our emotions can
then be described in the framework of economic computational principles,
namely as error signals minimized by machine learning algorithms to opti-
mize computational performance (Glimcher et al., 2008).
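This computational framing can be made concrete with a toy example. Below is a minimal sketch, in Python, of an error-minimizing update of the kind meant here, a Rescorla-Wagner/TD-style learning rule; it illustrates the general principle only, is not a model taken from Glimcher et al. (2008), and all parameters are invented:

```python
import random

random.seed(0)

def learn_value(alpha=0.1, episodes=200):
    """Learn the expected reward of an action by repeatedly correcting
    a prediction with its own error (a Rescorla-Wagner/TD-style rule).
    The prediction error delta is the 'error signal' that learning
    gradually minimizes."""
    value = 0.0
    errors = []
    for _ in range(episodes):
        reward = random.gauss(1.0, 0.1)  # noisy outcome of the action
        delta = reward - value           # reward prediction error
        value += alpha * delta           # error-correcting update
        errors.append(abs(delta))
    return value, errors

value, errors = learn_value()
print(f"learned value: {value:.2f} (true mean reward: 1.00)")
print(f"mean |error| over first 20 episodes: {sum(errors[:20]) / 20:.2f}")
print(f"mean |error| over last 20 episodes:  {sum(errors[-20:]) / 20:.2f}")
```

The error signal is large while the prediction is poor and shrinks as learning proceeds, which is the sense in which such algorithms "minimize" it.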
However, whereas the performance of computers steadily increases, hu-
man cognitive performance remains strictly bound, and can hardly be opti-
mized. Thus, computational measures of cognitive performance like intelligence, memory span, and perceptual precision show only little improvement with training, and remain prone to errors. The brain/mind rather seems to be
optimized on the time scale of biological evolution, and therefore appears to
be outpaced by the development of information technology, yielding the impression that the brain/mind is maladapted to modern environments. Moreover, whereas computation as a disembodied process abstracted from its physical implementation can be everlasting, human cognition declines with age and disease, reaching its ultimate end with death. Against the background of increasingly powerful computers, our brain/mind thus appears deficient.
Used as tools, machines can only externally compensate for this deficit. But
with the brain/mind envisaged as a machine itself, we apparently have the
opportunity to cancel this deficit by expanding the brain/mind’s machinery
with technical devices interfacing its internal processes. Such brain-machine
interfaces then could directly augment the functionality of the brain/mind
serving as a prosthesis for our internal cognitive system. By the concerted
technical expansion of our body, brain, and mind, as proposed by
transhumanist movements, we could enhance our limited human perfor-
mance to overcome our biological destiny, and finally even reach posthuman
immortality by uploading our conscious mind from the brain to a disembo-
died whole-brain emulation running on a renewable and cosmically distri-
buted computer (Kurzweil, 2012). At this point, the project of conceiving hu-
man body and mind as a machinery exposes itself as a transcendental, futu-
ristic project, which not only drives the technological convergence of
nanotechnology, biotechnology, information technology, and cognitive sci-
ence, but also exerts tremendous political and economical power. Thus, with
the Human Brain Project (HBP), the European Community provides 1 billion euros of funding for the development of a whole-brain emulation in a supercomputer, although most experts heavily doubt its feasibility.
In a world more and more dominated by machines, all this highlights the importance of reflecting on and clarifying our increasingly intimate relationship to machines. In the development of brain-machine interfaces, this
relationship is brought to an extreme, which makes them an interesting
case for exploring the dependencies between human nature and artificial
devices (Deliano, 2010).
The state of the art of brain-machine interfaces
Brain-machine interfaces have been developed since the 1950s mainly in
the field of medicine, and some of them are already successfully applied in the clinic today as so-called neuroprostheses. Here, brain-machine interfaces pro-
vide solutions to the fundamental neurological problem that in the adult mam-
malian central nervous system the capacity for the intrinsic repair of damage following destructive inflammation, degeneration, or injury is, compared to other parts of the body, quite limited. Thus, in the central nervous system, neu-
ral tissue lost through damage is hardly replaced. Although recent findings in-
dicate that neurogenesis from endogenous stem cells occurs in certain regions
of the adult brain, the number of newly generated neurons may not be suffi-
cient to replace lost neuronal tissue (Braun & Jessberger, 2013). Even though
the brain is highly plastic, and can compensate for some brain damage to an amazing degree, even small damage to certain brain regions can have devastating effects on a subject's perceptual, motor, and cognitive performance. Clas-
sical treatment of the resulting symptoms consists of substituting rather than
restoring the impaired or lost function by external prosthetic tools, outside the
nervous system. This way, deaf patients do not acquire new hearing but learn
lip reading, blind patients do not acquire new vision but learn Braille-reading,
and paralyzed patients do not reacquire their movement ability, but learn to use
a wheelchair instead. The alternative is the internal restoration of neural func-
tions by technical devices interfacing selected parts of the nervous system,
so-called neuroprostheses (Ohl & Scheich, 2007).
Commonly, the interface consists of electrodes chronically implanted
into the brain (Fig. 1A, B, C), through which electric brain activity can be
either recorded or stimulated allowing for causal interactions with the
brain. Thereby, the aim is to establish spatially and temporally specific
electrical contacts to as many brain cells as possible. This led to the nanotechnical development of miniaturized electrode systems with up to
1000 electrode contacts. Integrated with amplifiers and stimulators, these
electrode systems yield brain chips, which can be durably implanted into
the brain without major damage, and which can be controlled wirelessly from
outside the skull (Grill et al., 2009). However, brain-computer interfacing
might be further revolutionized by a new technique called optogenetics, by
which the gene sequences of light-sensitive proteins derived from certain
types of algae and bacteria are introduced into brain cells through well-controlled transgenic modifications (Yizhar et al., 2011). Brain cells expressing
these proteins can then be selectively activated or suppressed by light de-
livered to the brain via ultrafine optic fibers (Fig. 1E). This technique makes it possible to target brain cells with certain functions, and to control their electric activity in a much more specific way than electric stimulation (Fig. 1F).
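To make the contrast with electric stimulation more tangible, here is a minimal sketch of a light-gated neuron model in Python: a leaky integrate-and-fire cartoon with an added light-gated current. All constants are invented for illustration; this is not a published opsin model, and real channelrhodopsin kinetics are considerably richer:

```python
def simulate_light_driven_neuron(light_pulses, dt=0.1, t_max=100.0):
    """Leaky integrate-and-fire neuron with a light-gated current.

    light_pulses: (onset_ms, offset_ms) intervals during which the
    opsin conductance is open. All constants are illustrative only.
    """
    v, v_rest, v_thresh, tau = -70.0, -70.0, -55.0, 10.0  # mV, mV, mV, ms
    g_light, e_light = 0.5, 0.0  # light-gated conductance and reversal potential
    spikes = []
    t = 0.0
    while t < t_max:
        light_on = any(a <= t < b for a, b in light_pulses)
        i_light = g_light * (e_light - v) if light_on else 0.0
        v += dt * ((v_rest - v) / tau + i_light)  # leak current + light current
        if v >= v_thresh:                         # threshold crossing: spike
            spikes.append(round(t, 1))
            v = v_rest                            # reset after the spike
        t += dt
    return spikes

spikes = simulate_light_driven_neuron([(20, 40), (60, 80)])
print(f"{len(spikes)} spikes, first few at t(ms) = {spikes[:5]}")
```

The model neuron fires only while a light pulse keeps the opsin conductance open, which caricatures the cell-type-specific, temporally precise control optogenetics offers.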
Independent from these hardware aspects, the design of brain-machine in-
terfaces generally rests upon the assumption that the brain generates, from its sensory input, internal representations of reality encoded in the electrical activity
of the brain cells. In transforming the encoded information through neural com-
putation, new internal representations are formed, by which the brain can solve
problems, mediate decisions, and as a final result generate motor output, in order
to intentionally change the outside world based upon its neural representations
(de Charms & Zador, 2000). In most current approaches, brain-machine inter-
faces aim at accessing these internal representations by the direct interaction
with the electric brain activity via an electric or optogenetic interface. Central
sensory neuroprostheses for example, are devised to directly encode sensory in-
formation into the brain/mind system by electrically stimulating brain cells
(Tehovnik & Slocum, 2013). In bypassing damaged sensory brain parts, lost
neural functions can be restored. Properties of external stimuli in the brain are
thereby often thought to be encoded in topographically organized map represen-
tations, with neurons at a certain location in the map responding best to a specific
stimulus parameter. Such map representations are often found in a brain struc-
ture called cortex, which forms the folded surface of the brain, plays an impor-
tant integrative role in most cognitive phenomena, and is often regarded as con-
stituting the highest processing level in the hierarchy of the brain. The primary
visual cortex, for example, forms such a map of the visual field. Neurons within
this map are optimally recruited by the stimulation of the corresponding site in
the visual field. Accordingly, electric stimulation of a site in the cortical map
elicits the perception of a dot of light, a so-called phosphene, located at the site of
the visual field represented by the stimulated map locus. Already in 1953, Krieg
(1953) proposed that, based on this map organization, spatial patterns of
electric stimulation delivered to visual cortex could yield a single coherent raster
image of phosphenes, which could be used to restore vision in the blind. Various
interfaces for visual, auditory and somatosensory cortices have been developed
since then, in order to restore lost sensory functions. However, none of them has yet reached the level of clinical applicability.
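Krieg's raster-image idea is simple enough to sketch in code. The sketch below assumes an idealized one-to-one retinotopy between image pixels and stimulation sites, which real, strongly distorted cortical maps do not offer; electrode layout, threshold, and current values are all invented:

```python
def image_to_stimulation(image, threshold=0.5, amplitude_ua=20.0):
    """Map a low-resolution binary image onto a grid of cortical electrodes.

    Assumes pixel (row, col) maps to electrode (row, col), so each bright
    pixel should elicit one phosphene. Returns (electrode, current) pairs.
    """
    commands = []
    for r, row in enumerate(image):
        for c, pix in enumerate(row):
            if pix >= threshold:  # bright pixel: a phosphene is wanted here
                commands.append(((r, c), amplitude_ua))
    return commands

# A 5x5 "T" shape as the input image.
image = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
for electrode, current in image_to_stimulation(image):
    print(f"stimulate electrode {electrode} with {current} uA")
```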
On the other hand, brain-machine interfaces reading out information
from the brain to restore lost motor functions are much more successful.
Thus, neural activity recorded from multiple electrodes in the cortex can be
used to reconstruct three-dimensional arm movements (Hatsopoulos & Donoghue, 2009). These movements can be decoded even if they are only
intended, without being actually carried out. It has been demonstrated that
via such motor interfaces, paralyzed patients who are not able to move their
limbs anymore, can actually operate external devices like a robotic arm by mere intention, and reach a goal like eating a piece of chocolate (Collinger et al., 2013, Fig. 1D).

Figure 1: Brain implants: electrode arrays (A, B, C) and optogenetic systems (E) for the recording (see Fig. 2) and stimulation (F) of the electric activity of brain cells, as used in human brain-machine interface technology (D) [(A) to (D) from Fig. 1 in Hochberg L.R. et al. (2012), Nature, 442 (7099); (E) from http://www.stanford.edu/group/dlab/optogenetics/; (F) from Fig. 2 in Deisseroth, K.]

By combining sensory and motor neuroprostheses (Fig. 2A, B), one might then actually devise whole-body neuroprostheses, which replace large parts of the body by rerouting its sensorimotor feedback via a whole-body exoskeleton or a robot (Lebedev & Nicolelis, 2009). Also, first
steps are being taken towards neuroprostheses for replacing central, cognitive brain functions. Though far from being applicable, a brain chip is currently
under development, which aims at emulating the complex functions of the
hippocampus, a brain structure that plays an important role in memory for-
mation (Berger et al., 2011). By decoding hippocampal input, then artificially carrying out the hippocampal computations, and finally feeding the transformed information back to the output structures of the hippocampus, such a brain chip could one day replace lost hippocampal functions, and thereby alleviate severe memory deficits occurring, for example, with neurodegenerative diseases like Alzheimer's. Finally, neuroprostheses are also designed for sup-
pressing unwanted, pathological brain states by modulating the activity of
target structures deep in the brain. Target structures include motor structures, but also so-called limbic structures involved in emotional processes. Besides
largely reducing Parkinsonian tremor as a brain pacemaker interfacing motor
structures, deep brain stimulation of limbic structures has been demonstrated
to be capable of suppressing unwanted symptoms of depression, obses-
sive-compulsive disorder, and addiction (Hoy & Fitzgerald, 2010).
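At the core of such motor read-out typically lies a linear mapping from recorded firing rates to movement variables. The following sketch simulates cosine-tuned neurons and fits an ordinary least-squares decoder; it caricatures the general approach reviewed by Hatsopoulos & Donoghue (2009) rather than any specific published decoder, all numbers are invented, and it requires NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 cosine-tuned neurons observing 500 two-dimensional hand velocities.
n_neurons, n_samples = 50, 500
preferred = rng.uniform(0, 2 * np.pi, n_neurons)           # preferred directions
velocity = rng.normal(0, 1, (n_samples, 2))                # true hand velocities
tuning = np.stack([np.cos(preferred), np.sin(preferred)])  # 2 x n_neurons
rates = velocity @ tuning + rng.normal(0, 0.5, (n_samples, n_neurons))

# Fit a linear decoder W such that rates @ W approximates velocity.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

decoded = rates @ W
corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(f"correlation of decoded vs. true x-velocity: {corr:.2f}")
```

Because the decoder only needs the recorded rates, it works equally well whether the movement is executed or merely intended, which is what makes such read-outs usable for paralyzed patients.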
Cyborg metaphors
As it becomes apparent from the research projects described above, the
scope of brain-machine interface technology reaches far beyond the deve-
lopment of neuroprosthetic applications for the treatment of specific
neuropathologies and disabilities. Although brain-machine interface techno-
logy today still concerns only a very small community of ill or handicapped persons, the borderline between pathological or disabled and healthy states is rather fluid. Likewise, the step from restoring lost functions to augmenting
normal functions is quite small. Many of us might accordingly become in-
cluded in the group of potential users of this technology in the future. But ir-
respective of whether we will actually be carrying such devices or not, brain
machine interfaces concern us in a deeper way. By directly intervening into
our brain, which we see as the seat of our perception, our actions, our cogni-
tion, and our emotions, brain-machine interface technology touches our soul.
Creating nearly biblical miracles in letting paralyzed people walk, blind peo-
ple see, or deaf people hear again, already current neuroprosthetic technology
nourishes our transcendental, spiritual desires, as described at the beginning.
Together with the promise of technical progress and innovation, this
technology strongly connects to our future expectations of what it means to
be human. Therefore, brain-machine interface technology, since it ap-
peared on stage during the last century, has inspired science fiction fanta-
sies in numerous novels, movies and computer games, irrespective of being
feasible or actually providing suitable applications. These fantasies have in
turn strongly driven technological development, and with the recent ad-
vancements seem to reenter our reality. The role model in this fantastic
story is the fictitious character of the cyborg, a cybernetic organism, a hy-
brid of machine and organism. The term cyborg was invented in 1960
by the medical engineer Manfred Clynes and the psychiatrist Nathan Kline
to describe their vision of augmenting the human body by technical devices
to better adapt to space travel (Clynes & Kline, 1960).
Since then, the cyborg has been developed into a science fiction protago-
nist that stands for the utopian and dystopian views, the hopes and fears related
to the transformation of our human nature by artificial, technical devices. In
the utopian view, the intimate coupling with machines strengthens our limited
self by equipping our body, our brain, and our mind with superhuman abilities.

Figure 2: Current conceptions and working principles of brain-machine interfaces: (A) The agent-world circuitry underlying brain-machine interfaces. (B) Decoding of motor intentions. (C) "Ratbots" [(A) and (B) from Fig. 6 in Hatsopoulos N.G. and Suminski A.J., Neuron (2011), Volume 72, Issue 3, Pages 477–487; (C) Illustration Dr. John Chapin/Meritum Media]

Extending and enhancing the performance of our mind, it is above all the
brain-machine interfaces that empower us to gain the dominion over the
world, and over our biological destiny. Beyond the medical treatment of
pathological states, the development of such neuroenhancement strategies is already today inherent in many research projects on brain-machine interfaces.
On the other hand, in the dystopian view, this technology violates our self, our
brain, and our body, and makes us suffer. Here, brain-machine interfaces pro-
vide ways for others to take over the control of our mind and our actions,
maybe even without being noticed by us. Such a scenario does not seem to be
too farfetched, as suggested by the “ratbot” experiment of Talwar and col-
leagues (2002), which has provoked a highly controversial debate about the
potential dangers of brain-machine interface technology. In this experiment,
the navigation of a rat through a three-dimensional maze could be remote-con-
trolled via a brain machine interface (Fig. 2C). To move the rat forward, the
experimenters delivered electrical stimulation to mesolimbic structures deep
in the brain, which are known to drive appetitive seeking behavior. Virtual
touch sensations at the rat’s left or right whiskers evoked by electric stimula-
tion of the corresponding representations in somatosensory cortex were used
as signals to turn the animal either left or right. Today, research on the re-
mote-control of animals is pursued in the field of military research largely hid-
den from the civil scientific community. The aim of this research is to create
“animal-bots” that can spy out enemies by carrying a camera, remove land-mines, or even place such explosive weapons in enemy territory.
However, the fictitious figure of the cyborg is not just a prospect of our
technologically determined future. Both as utopian superhero and as
non-human monster, the character of the cyborg radically puts into question
the location and the boundaries of our mind- and body-self (Haraway, 1991).
It questions our Western conviction that our mind is enclosed within our
physical brain in our head, and that action and perception by which the mind
interacts with the world is related to our physical body. With the conception
of body, brain and mind as computational machine, the functions of mind
and body can be extended to technical devices via an interface. Then the
boundaries of mind- and body-self are merely determined by the reach of
these devices capable of transcending all biologically predetermined tempo-
ral and spatial limits. However, without boundaries, it also becomes increa-
singly difficult to determine what actually belongs to this self, and what to
the external world. The dissolving boundaries finally leave the operations of
brain, body and mind without meaning, as it makes no sense to talk about a
human self anymore. Freed from all limitations and constraints, the human
agent as an entity ceases to exist. Interestingly, cyborgs in science fiction are never fully transformed into machines, but preserve a remnant of humanity in being irrational, intuitive, empathic, or desperate, in suffering from fear and pain, or in being mortal. This is a precondition for the cyborg to exist. Re-
moving the limited, vulnerable, and mortal residual subject would simply
turn the cyborg into a meaningless entity, a trivial and boring machine.
In the figure of the cyborg a dichotomy comes into view: while conceiving of body, brain, and mind in terms of a universal, disembodied, rational, objective machine, we still experience ourselves as situated, affective, embodied
subjects. In this dichotomy it becomes apparent that the relationship between
humans and machines is only metaphoric. Brains and bodies actually are not
machines. Rather, machines are designed by humans to serve their purposes.
However, both scientific and folk conceptions of mind, brain, and body
heavily draw on such metaphors, because it is through metaphors that concepts
and explanations get productive and intelligible (Lakoff & Johnson, 1980). So
what are brains and bodies, if not machines? The cyborg herein gives us reason
to reconsider and to reconfigure the prevailing human-machine metaphors, to-
gether with the implicit conceptual presuppositions they come along with.
Reconsidering the brain-machine
Current machine conceptions of brain, body, and mind originate from
modern neuroscience. This highly heterogeneous field of research is much less theory-based than a discipline like, for example, physics. It is an interdisciplinary
undertaking that pursues many parallel lines of research on many different le-
vels of observation. Neuroscience herein not only tries to explain the brain's physiology, but also to relate it to a psychological description of behavior and cog-
nition. Based on the conviction that the mind is somehow generated by the
brain, neuroscientists seek for neural correlates of psychological phenomena
like perception, learning, memory, attention, decision making, and action, often with the aim of establishing an isomorphic, one-to-one relationship be-
tween physiological and psychological phenomena. However, the laws de-
scribing physiological and psychological phenomena are generally not com-
parable. In its effort to integrate different levels of observation and explanatory
domains, brain science therefore is prone to category mistakes committed by
projecting explanations at one level of observation, to another, incommensura-
ble level. The brain for example does not perceive, act, or learn anything like
the cognitive agent it is part of (Bennett & Hacker, 2008). Still, a link between
physiology, perception, action, and cognition can be established by employing
the conception of causality. Via causal relations, more genuine bridges be-
tween levels of observation and explanatory domains can be built.
In this respect, the notion of a computational brain operating on neural
representations of the world, which is at the heart of brain-machine interface
technology, is commonly flawed. Computational approaches rely on informa-
tion theoretic concepts that describe information in statistical terms devoid of
any semantic aspects, in order to quantify and optimize the transfer and the al-
gorithmic transformation of information. As Claude Shannon, one of the
founders of information theory, noted: “The fundamental problem of commu-
nication is that of reproducing at one point either exactly or approximately a
message selected at another point. Frequently the messages have meaning;
that is they refer to or are correlated according to some system with certain
physical or conceptual entities. These semantic aspects of communication are
irrelevant to the engineering problem”. For the computer, these semantic as-
pects can be provided by its users, but in the brain, there is no such user who could endow neural representations with meaning. The neural representations
and maps targeted by brain-machine interfaces therefore carry the information
about the world only in the eye of the observer. They are obtained by correla-
ting neural activities with a set of observables in the world, which does not
even allow for creating a causal link between the events in the world and the
brain. Although correlation is a necessary prerequisite for causality, it is not
sufficient for it. Thus, correlations are highly biased by the selection of the
observables by the experimenter, and might be simply spurious due to the
contribution of non-observed factors. The following example illustrates this:
in Europe the body weight of the human population is negatively correlated
with hair length. However, this is not a causal relationship, but relies on a third factor, namely gender differences in the population: women, who on average have a lower body weight, often also have longer hair.
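The example is easy to reproduce in simulation. The sketch below draws two subpopulations with invented means and shows that weight and hair length end up negatively correlated even though neither causes the other; group membership is the hidden third factor:

```python
import random

random.seed(1)

population = []
for _ in range(500):
    if random.random() < 0.5:    # subgroup with lower weight, longer hair
        weight, hair = random.gauss(65, 8), random.gauss(40, 12)
    else:                        # subgroup with higher weight, shorter hair
        weight, hair = random.gauss(85, 10), random.gauss(10, 6)
    population.append((weight, hair))

def pearson(xs, ys):
    """Pearson correlation coefficient of two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

weights, hairs = zip(*population)
print(f"weight-hair correlation: {pearson(weights, hairs):.2f}")  # clearly negative
```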
But even if a causal link between neural representations and the world
can be established, this would still run into the problem that humans are not
perceiving or acting on a representation of the world, but that they perceive and act on the world itself, without the mediation of a kind of internal mirror image or model (Bennett & Hacker, 2008). As Rodney Brooks, a lea-
ding expert in robotics, puts it: “The world is its own best model”. Still,
correlative and causal dependencies between neural activities and events in the external world yield important insights for neuroscientists, as they
can provide the experimenter with information about the brain’s structure
and its dynamical states, even though the brain does not and cannot exploit
these dependencies in relation to the external world, as can be done from the standpoint of an external observer.
If brain-machine interface technology rests on a flawed conception, why
do state-of-the-art interfaces still work and yield suitable applications? Via the
optical fibers or electrodes these interfaces causally interact with the brain by
stimulating or recording electric nerve cell activity. To explain the working
principles of brain-machine interfaces, further causal links between the inter-
faced neural activities and the restored, enhanced, or simply altered cognitive
phenomena have to be established. However, this is not a trivial task. Because of the brain's massive reciprocal feedback connections, the linear causal chains we are used to employing in our explanations fail to describe its operations. This
requires concepts of causality which include an understanding of circular
cause and effect relationships. Linear systems theory has developed such con-
cepts for linear feedback operations (Freeman, 1975). However, this theory
does not exactly hold for the brain’s operations, which are highly nonlinear.
Nonlinear feedback can be described in terms of nonlinear dynamics and
chaos theory, but these theories are only designed for the solution of low-di-
mensional problems that are stationary in time. Therefore these theories do not
apply well to the brain. With its rapidly changing states, the brain is highly nonstationary, and with its large mass of brain cells connected via abundant distributed feedback and feedforward connections, it operates in a high-dimensional
state space (Freeman, 2000a,b). Moreover, as noise and fluctuations play an important role in brain dynamics, stochastic descriptions have to be included in brain theory as well. The brain can therefore be regarded as a nonlinear, nonstationary, high-dimensional, dynamic, and stochastic system. Currently, there is no theory that could fully describe such a system.
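A toy system can at least illustrate what these properties mean in combination. The sketch below runs a nonlinear map (the logistic map) with a slowly drifting parameter (nonstationarity) and additive noise (stochasticity); unlike the brain it is one-dimensional, and all values are invented:

```python
import random

random.seed(0)

def noisy_drifting_logistic(steps=2000):
    """Iterate x -> r*x*(1-x), a nonlinear map, while the parameter r
    slowly drifts (nonstationarity) and noise perturbs each step
    (stochasticity)."""
    x, xs = 0.3, []
    for t in range(steps):
        r = 3.5 + 0.4 * t / steps                 # slow drift from 3.5 to 3.9
        x = r * x * (1 - x) + random.gauss(0, 0.005)
        x = min(max(x, 1e-6), 1 - 1e-6)           # keep the state inside (0, 1)
        xs.append(x)
    return xs

xs = noisy_drifting_logistic()
variance = lambda seg: sum((x - sum(seg) / len(seg)) ** 2 for x in seg) / len(seg)
print(f"state variance, early in the run: {variance(xs[:200]):.3f}")
print(f"state variance, late in the run:  {variance(xs[-200:]):.3f}")
```

Even this caricature defeats standard stationary analyses: the same statistic computed early and late in the run disagrees, because the system's own rules have changed underneath it.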
Reconfiguring the brain-machine
Still, the conceptualization of the brain as a dynamical system has proven to be useful. In the growing field of neurodynamics, first steps to-
wards an understanding of the brain on the basis of dynamic systems theory
have been taken through linear approximations, and by numerical com-
puter simulations (Freeman, 1975, 2000a,b). Neurodynamics investigates the changing spatial and temporal distributions of neural activities based on
the causal interactions in the brain. Spatiotemporal patterns of neural acti-
vity can be formalized as dynamic states in the brain system’s state space.
The state space thereby must not be confused with physical spacetime, but
describes the possible dynamic behaviors and changes of the system along
the dimensions of the causally relevant factors.
Notably, the brain as a system can be described on many different levels
of observation. Modern neuroscience investigates proteins and genes in the
brain on a molecular level, synapses on a subcellular level, neurons on a cellu-
lar level, microcircuits made up of small arrangements of different neurons,
larger networks including millions of neurons, whole brain regions, as well as
hierarchies of such brain regions forming global networks connected via neu-
ral pathways. Behavior and cognition could then be regarded as the ultimate,
macroscopic level of brain function. Regarding neurons as the building blocks
of the brain, the aim is often to causally explain the macroscopic cognitive
operations of the brain on the microscopic level of single neurons. As with the elementary particles in Newtonian physics, it is thereby assumed that all
causal influences in the system emanate from single neurons and their interac-
tions, and that explanations on this microscopic level are the most fundamental.
To causally link all these levels, it has proven to be helpful to create
bridges between microscopic and macroscopic levels via an intermediate,
mesoscopic level constituting an original domain of explanation free from
purely microscopic or macroscopic properties. Statistical thermodynamics, developed in the 19th century, is a good example of such a mesoscopic bridge. In providing a statistical description of ensembles of particles at a mesoscopic level, revolutionary at the time, it made it possible to create a causal link between the microscopic level of Newtonian particle movements and the macroscopic phenomenon of temperature. Why not create a similar bridge between the activity of neurons and cognitive phenomena?
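The thermodynamic bridge can be demonstrated in a few lines: sample microscopic particle velocities, then recover the macroscopic temperature from nothing but their mean kinetic energy, using the one-dimensional relation <m*v^2/2> = k_B*T/2. The particle mass and target temperature below are merely illustrative:

```python
import random

random.seed(42)
K_B = 1.380649e-23                     # Boltzmann constant (J/K)
mass = 6.6e-26                         # roughly the mass of an argon atom (kg)
target_t = 300.0                       # sample velocities appropriate for 300 K

# Microscopic level: 100,000 individual particle velocities (1-D).
sigma = (K_B * target_t / mass) ** 0.5
velocities = [random.gauss(0, sigma) for _ in range(100_000)]

# Macroscopic level: temperature from mean kinetic energy, <m*v^2/2> = k_B*T/2.
mean_kinetic = sum(0.5 * mass * v * v for v in velocities) / len(velocities)
temperature = 2 * mean_kinetic / K_B
print(f"temperature recovered from particle motion: {temperature:.1f} K")
```

No single particle has a temperature; the quantity exists only at the level of the ensemble, which is exactly the kind of level-bridging the mesoscopic program aims at for neurons.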
The first hard problem encountered on this way is to create a bridge be-
tween the microscopic actions of single neurons and the macroscopic ac-
tions of global brain regions and networks related to cognition. Here, the
mesoscopic description of the mass action of large neural ensembles is an
important step towards creating more causal links between the brain’s ac-
tivity and perception, behavior, and cognition (Freeman, 1975, 2000a,b).
A mesoscopic description of brain activity can be obtained in animal studies by recording field potentials in the brain, which reflect the mean electrical activity of hundreds of thousands of neurons around the recording electrode. In sensory cortex, field potentials recorded from many electrodes display mesoscopic spatiotemporal activity patterns. These complex patterns repeatedly emerge from the ongoing activity, and cannot be discarded as noise (Lilly, 1954). However, no systematic relationship between these activity patterns and the sensory input could be found. Recording from 400 electrodes, DeMott (1966) suggested that sensory input “is presented to the cortex not as
a map, but as a very complex spatial-temporal sequence, in which every part
of the cortex participates in displaying information from every part of the
[sensory] field” (DeMott, 1966, p. 29). The work of Walter Freeman and our own work have shown that such patterns are induced by external stimuli
which have a meaning for the animal (Freeman, 2000a, Deliano et al.,
2009b). Emerging from the ongoing eigenactivity of the brain, these patterns
are not driven or determined by external stimuli, unlike the patterns that can be evoked as direct stimulus responses. Ongoing patterns do not form map representations of stimulus features as evoked patterns do (de Charms & Zador,
2000). Whereas evoked patterns are topographically organized, and covary
with the physical stimulus parameters, ongoing patterns are distributed over
a large area, and covary with the individual situation of the animal. Whe-
never the behavioral situation, and hence the meaning of the stimuli changes,
e.g. by learning, the ongoing patterns change as well, even if the presented
stimuli remain physically the same (Freeman, 2000a). When animals learn to
sort physically different stimuli into the same category, then these patterns
reflect the learned category, but not the physical features of the stimuli
(Ohl et al., 2001). Physiologically, these patterns are carried by the ampli-
tudes of ongoing distributed neural oscillations in the so-called gamma-band
(~20-80 Hz). They emerge within a few milliseconds and persist for a few hundred milliseconds, until they dissolve and give rise to a new pattern.
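A standard way to extract such gamma-band amplitude patterns from a recorded field potential is to band-pass filter the signal and take the analytic amplitude of the result. The sketch below applies this pipeline to a synthetic signal with an embedded 40 Hz burst; it illustrates the style of analysis, not the actual processing used in the studies cited above, and requires NumPy and SciPy:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                              # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Toy "field potential": a 40 Hz gamma burst between 0.5 s and 1.0 s,
# riding on slow activity plus measurement noise.
burst = (t > 0.5) & (t < 1.0)
rng = np.random.default_rng(0)
signal = (0.5 * np.sin(2 * np.pi * 3 * t)        # slow component
          + burst * np.sin(2 * np.pi * 40 * t)   # gamma burst
          + 0.2 * rng.normal(size=t.size))       # noise

# Band-pass 20-80 Hz, then take the amplitude envelope via the Hilbert transform.
b, a = butter(4, [20 / (fs / 2), 80 / (fs / 2)], btype="band")
gamma = filtfilt(b, a, signal)
envelope = np.abs(hilbert(gamma))

print(f"mean gamma envelope inside the burst:  {envelope[burst].mean():.2f}")
print(f"mean gamma envelope outside the burst: {envelope[~burst].mean():.2f}")
```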
Walter Freeman (2000b) has worked out a comprehensive
neurodynamic theory on these mesoscopic patterns, which explains their
generation by the self-organized mass action of hundreds of thousands of single neurons.
During the existence of the pattern-state, the degrees of freedom of the dy-
namics momentarily governing the cortex are largely reduced, which locally
lowers the brain’s entropy. Due to the second law of thermodynamics, this is
only possible because the brain exchanges energy and matter with its surroundings. As an open, dissipative system, the brain is therefore capable of crea-
ting order from chaos and noise. It turns out that this self-organization cannot
be simply explained by a bottom-up causality emanating from the microscopic level. As proposed in synergetics, one of the most successful theories of self-organization, developed by Hermann Haken (1983), this requires a conception of circular causality operating across levels of observa-
tion. The microscopic elements of the system like the neurons in the brain
thereby causally influence the formation of the mesoscopic pattern, due to
their interactions. However, the mesoscopic pattern in turn constrains and
enslaves the behavior of the microscopic elements. Macroscopic brain states
arising from the mesoscopic pattern states therefore are not only a result
of the microscopic actions of neurons, but vice versa have a strong causal in-
fluence on the microscopic activity. Hence, mesoscopic neurodynamics is
not only seeking explanations on the level of single neurons, but also on
the level of more global brain states and patterns that constitute order param-
eters governing the dynamics of the brain (Haken, 1983).
Physical theories of self-organization explain how macroscopic pat-
terns are formed. However, in the nonstationary brain such patterns are
steadily formed and destroyed preventing the system from becoming
trapped in a certain state. Such an itinerant alternation of order and disorder
can be achieved by systems capable of organizing themselves into critical
states, from which they are repeatedly kicked into ordered pattern states by
internal random fluctuations or external perturbations. The capacity for
self-organized criticality relies on the scaling properties of the system. It is
typically found in fractal, i.e. self-similar systems. In the brain, self-similar
states can be found over many different spatial and temporal scales, ranging from ten to a few hundred milliseconds, and from millimeters to centimeters. In the alternation of order and disorder resulting from its fractal orga-
nization, the brain can generate sequences of dynamic states in a highly
flexible manner. The important role of noise and fluctuations thereby calls for further extending the conception of causality by allowing causal relationships to exert their effects not only on deterministic variables, but also on the probability distributions of stochastic variables.
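The classic toy model of self-organized criticality is the Bak-Tang-Wiesenfeld sandpile, sketched below: grains are dropped at random, any site holding four or more grains topples onto its neighbors, and the pile settles into a critical state that produces avalanches of widely varying sizes without any parameter tuning. It is offered purely as an illustration of the concept, not as a model of the brain:

```python
import random

random.seed(0)

def sandpile(size=20, grains=20_000):
    """Bak-Tang-Wiesenfeld sandpile. Returns the avalanche size
    (number of topplings) triggered by each dropped grain."""
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        r, c = random.randrange(size), random.randrange(size)
        grid[r][c] += 1
        toppled = 0
        unstable = [(r, c)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4                  # topple: shed four grains
            toppled += 1
            if grid[i][j] >= 4:              # still overloaded: topple again
                unstable.append((i, j))
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:  # edge grains fall off
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanches.append(toppled)
    return avalanches

sizes = sandpile()
settled = sizes[10_000:]                     # ignore the transient build-up
print(f"largest avalanche: {max(settled)} topplings")
print(f"avalanches larger than 50 topplings: {sum(s > 50 for s in settled)}")
```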
At this point, it should be noted that theories of nonlinear dynamics,
self-organization, and self-organized criticality have been fully worked out
only for comparatively simple physical systems like lasers, but not for the
brain, yet. Here the descriptions rather provide new metaphors, which are, however, of great importance for the development of new conceptions of the brain. As Walter Freeman once stated, the hurricane with its spiral patterns and turbulences serves as a much better metaphor for the brain than the computer.
Such dynamic metaphors are also much less prone to the aforementioned cate-
gory mistakes than many of the still widely used computational metaphors.
The appearance of self-organized states requires constraints and boundary conditions, like the walls of a container in which a pattern-forming chemical reaction is carried out. In physico-chemical systems, these boun-
daries are imposed by the experimenter. In the brain, such boundaries are
constituted by the brain’s sensory surfaces connecting it to the sensory or-
gans. The boundary conditions for self-organization in the brain might there-
fore be imposed on the brain by the external world via its sensory surfaces.
However, the brain is capable of actively influencing its sensory surfaces and the sensory organs. Sensory brain regions are not purely afferent struc-
tures receiving external input, but send back massive efferent feedback pro-
jections all the way down to the sensory organs. For example, the auditory cortex, often viewed as the end point of the auditory pathways ascending from the ear, can exert mechanical influence on the inner ear via cortico-ef-
ferent neural projections, which in turn alters sensory transduction. Sensory
parts of the nervous system are therefore not passive receivers or transmitters
of external information, but actively control their own sensory state. The
brain can therefore determine its own boundary conditions. As has been
pointed out by the biologists Humberto Maturana and Francisco Varela
(1992), this marks the crucial difference between physico-chemical systems
and living systems like the brain. The latter are not only capable of self-orga-
nizing into pattern states, but also of self-generating their own conditions of
existence, i.e. their metabolic, morphologic, and sensory boundary condi-
tions. According to Maturana and Varela, living systems can be defined as
autopoietic systems. Being operationally closed, autopoietic systems have
an identity defined by their own operations (Rudrauf et al., 2003). Since an autopoietic system is an autonomous entity, it makes no sense to externally ascribe functions to it. Even so, such functional descriptions can provide valuable means for external observers to deal with living systems. But still, they do not provide an explanation for the operations of living systems, which can only be understood in terms of their internal causal interactions. However, there is a third way to gain an understanding of a living system, which consists in sharing a world with it through coevolution.
As reflected by its autonomous, self-organized eigenactivity, the brain is
an autopoietic, living system, which can only be perturbed but not driven or in any way determined by external input. As the neurobiologist Amos Arieli
nicely describes it: “…the effect of a stimulus might be likened to the addi-
tional ripples caused by tossing a stone into a wavy sea” (Arieli, 1996). In an
autopoietic brain, mind control and mind reading via brain-machine interfaces appear unfeasible. Thus, there is no content that can be read out from the brain, as mind reading would require, because the brain does not harbor an in-
ternal world or create intentions that could be accessed via an interface. Also,
there is no way of inscribing information into such a system via an interface, as
would be required for mind control. The brain can only be causally perturbed
via the interface, but the outcome of this perturbation is solely determined by
the brain. To unravel the working principles of brain-machine interfaces, this
leaves us with the task of studying the causal interactions between the interface and the ongoing brain dynamics more thoroughly (Deliano et al., 2009a).
Nevertheless, the autopoietic organization of the brain has fostered constructivist conceptions of the brain as creating its own virtual realities.
However, these conceptions run into the same problems already described
for the representationalist accounts (Bennett & Hacker, 2008). The brain
creates an internal world neither as a model of the external world nor as an emulated virtual reality. Otherwise, we would be left with a mystical brain that creates a ghost in its machinery. Again, this is not to say that
the brain’s operations are not correlatively and causally linked to cogni-
tion. The brain is just not the place of cognition, and it does not define the
boundaries and functions of cognition, but merely operates on its own neu-
ral states governed by its internal dynamics. At this point we simply have to let go of our conviction that the mind is in the brain in our head. But if it is not in the brain anymore, then where has the mind gone?
Extending the mind into the world
Although it is an operationally closed dynamic system, the brain is not like a solipsistic monad hanging in a vacuum. The brain is deeply immersed in the physiology of the body, not only via its sensory surfaces. It is literally bathed in the milieu of the body, and exchanges with it energy, building blocks of its morphology, and regulatory signals. As a dynamical system the brain can
then be viewed as embedded into the body, which is in turn embedded in the
external environment (Chiel & Beer, 1997). The dynamics arising from this
embedded system is characterized by various distributed feedback loops that
not only operate within the brain, but give rise to couplings across the borders
of brain, body, and environment (Beer, 2000). Due to their self-referential ac-
tion, these couplings constitute a higher-order autopoietic system which is capable of creating an autonomous self, an agent (Rudrauf et al., 2003). In the
view of embodied, situated cognition, what we call mind can be understood in
terms of the dynamic operations of this agent (Varela et al., 1992). Once more,
the agent does not have a mind, it does not have perceptions, memories, inten-
tions, qualia, or mental representations (Bennett & Hacker, 2008). Nor does it serve a function an external observer might be inclined to ascribe to it.
Rather, by actively generating its own order-states, and in being situated in the
world, such an agent directly perceives, memorizes, thinks, feels, intends, decides, and does other such cognitive things. Its operations are mediated by the
world itself, and not by some mental representations.
It is in this embodied, situated framework, that the brain’s causal rela-
tion to cognitive and experiential phenomena can be fully appreciated.
Here, the brain forms a dynamic core of the agent, which strongly shapes
its dynamics (Rudrauf et al., 2003). In constraining the behavior of the
agent, the brain as a condensation nucleus of order serves to maintain its
existence, or, in dynamical terms, to maintain the order-states that define the
agent’s way of life, its being there. Based on pre-afferent recurrent feed-
back operations, the brain is capable of predicting its own neural states, and
therefore can extend the actions of the agent into the future (Freeman,
2000c). In leveling the deviations from the expected neural states arising
from fluctuations within the brain, or from perturbations through body and
environment, the brain maintains the agent in a state of order, while at the same time allowing for an evolution of order-states that adapts the agent to a rapidly changing environment (Friston, 2010). This is achieved either by ac-
commodating the neural predictions in changing the order-states of the
agent, or by initiating actions through which the agent preserves its current
order by assimilating the changes in its environment.
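The leveling of deviations from expected states can be caricatured by a scalar error-correcting filter: the system predicts its next state, compares the prediction with what actually arrives, and assimilates a fraction of the discrepancy. This is a minimal sketch in the spirit of such predictive accounts, not Friston's free-energy formalism; the gain and the data are invented:

```python
import random

random.seed(3)

def track_with_predictions(observations, gain=0.3):
    """Maintain an internal estimate by correcting its own prediction
    with a fraction of the prediction error (a scalar, Kalman-like
    filter). The gain balances preserving the current order against
    assimilating the perturbation."""
    estimate = observations[0]
    for obs in observations[1:]:
        prediction = estimate                  # expected next state
        error = obs - prediction               # deviation from expectation
        estimate = prediction + gain * error   # level out the deviation
    return estimate

# A slowly drifting environment observed through noise.
world = [0.01 * step for step in range(200)]
noisy = [w + random.gauss(0, 0.5) for w in world]
print(f"final estimate: {track_with_predictions(noisy):.2f} "
      f"(true final state: {world[-1]:.2f})")
```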
Cognition then arises from an extended ecological cognitive system
made up of an agent evolving in a cognitive niche of its environment (Clark,
2010). On the one hand, the agent adapts to the constraints imposed by the
cognitive niche, on the other hand, the agent actively constructs the niche
through its cognitive actions. For an external observer, the coevolution of
brain, body, and environment therefore creates the impression that the agent
perfectly matches the world it lives in. It seems as if the agent is designed for
living in its niche with all its functions purposely adapted to the niche.
Through embodied cognition the agent can therefore make use of the external world and deeply integrate it into its cognitive operations. By an ongoing coupling with the world through actions like eye-movements, the agent can obtain information about the world just on demand, without the need to construct a detailed, compound world model. The sense organs would not allow for such a detailed description of the world at an instant of time, anyway. The retina, for example, only provides a very narrow area of sharp central color vision, of about the size of a euro cent. Still, we have the impression of seeing
a full, detailed visual scene. Of course, we might reconstruct the scene by
gathering successively foveated parts of it. However, as demonstrated by the
striking phenomenon of change blindness, we do not create such a detailed
compound representation (Noë, 2005). Indeed, large changes even in the central parts of a visual scene can go unnoticed by a subject visually exploring
the scene. The analysis of eye-movements during such a task reveals that subjects repeatedly look at those parts of the scene that are meaningful to them, and ignore irrelevant parts that do not grab their attention. Through our
eye-movements we do not systematically scan the scene, but actively retrieve
its meaningful aspects. In the act of vision, we thereby apparently rely on ex-
pectations, which arise from our implicit knowledge about sensorimotor de-
pendencies learned from exploring visual scenes. Through these expectations, as proposed by the philosopher Alva Noë (O'Regan & Noë, 2001), parts of a visual scene or an object can still be present for us, even if we are not currently
looking at them. The invisible parts of scenes and objects are right before our
eyes, just because we know how to bring them into view.
The conception of an embodied mind herein brings to the fore the experiential dimensions of our mental life, which are often neglected (Hurley & Noë, 2003). As can be seen, for example, from the experiments on change blindness, embodied actions nicely explain many aspects of our lived experience. The framework of embodied cognition therefore allows for drawing more direct causal links between the agent's lived experience and the cooperative dynamics of brain, body, and environment giving rise to the embodied actions of the agent. Creating such links would, however, require assessing lived experience through introspection. Introspection methods, though, are scientifically underdeveloped, since they have been discarded as unscientific by cognitive psychology for more than a century. However, in a research project initiated
by the late Francisco Varela called neurophenomenology, more disciplined
first person accounts are under development, which are grounded in the tradition of philosophical phenomenology originating from Edmund Husserl, Maurice
Merleau-Ponty, and Martin Heidegger (Varela & Shear, 1999).
Besides sensorimotor real-world coupling, the framework of embodied and situated cognition can also account for more abstract forms of reasoning. Thus, it can be shown that even the most abstract categories of
rational thought, e.g. in the field of mathematics and logic, are ultimately
grounded in the embodied actions of the agent (Lakoff, 1987). Further-
more, the arbitrary sign systems used in language, mathematics and logic
might serve embodied agents to exploit their capacity for real-world cou-
pling. As material entities, symbols are manipulable and could be used by
an agent as embodied stand-ins for more abstract operations (Clark, 2010).
As a dynamic, and flexible assembly, the embodied agent might not only
extend its mind to the material world, but also to other embodied agents.
The embodied framework therefore also offers explanations for social phenomena like empathy, bonding, dance, and teamwork.
Dance with the machines
Through their recurrent, world-based actions, embodied agents can learn
to integrate artifacts like tools, technical devices, signs, and symbols into
their cognitive acts. At the beginning, like when learning to drive a car, the
coupling with these devices creates an intransparent problem space. Lacking experience, we then often apply explicit rules to solve the posed problems. Always shifting gears when the speedometer of the car reaches a certain value is an example of such a rule. Hence, explicit rules are often employed as a simplifying aid for novices to gather experience (Dreyfus & Dreyfus, 1980). However, with growing expertise, these rules no longer always apply well. Then we start to find our own ways to deal with the oc-
curring problems. At the latest, when we reach a level of mastery and exper-
tise, the problem space disappears, and the car as an external device becomes transparent in its use. We just drive the car by relying on our intuitions
and our affect without having rules in mind. Emotions play an important role
here, both in constituting a problem space in a new and unknown situation,
and in dissolving it. When a problem space opens, and if we have no
guiding rules at hand, we normally stop our actions and start to reflect upon
the situation. However, in the complex world we live in, most problems cannot be solved by rational thought, or at least there is no time for doing so. In cal-
ling us back into worldly action again, emotions can dissolve the problem
space, and prevent us from getting trapped in endless, rational, egocentric reflections (Damasio, 2005). Thus, emotions make us decide upon the information we have right at hand. As embodied agents we can achieve this by
creating states of order that largely reduce the complexity of the world, we
live in. Interestingly, this also dramatically changes our experience. For an expert car driver, the car becomes an extension of the body. We can then even feel the boundaries of the car, e.g. when we come too close to another car and are in danger of a collision. Embodied agents are therefore capable of
steadily creating whole new agent-world circuits (Clark, 2010). They can
learn to deeply incorporate artifacts like machines into their cognitive and
experiential realm, and hence dynamically shift the boundaries of their self.
In this respect, particularly telling experiments have been carried out in
the field of crossmodal “sensory substitution”, where one tries to replace or augment a lost sensory modality by transforming stimuli characteristic of the lost modality into stimuli of another modality. For example, Bach-y-Rita and
colleagues (1969) developed a tactile vision substitution system (TVSS),
which converts an image captured by a video camera into a “tactile image”
produced by a matrix of 20 x 20 vibrotactile or electrotactile stimulators. If
the camera was positioned by the experimenter, blind or blindfolded subjects
were immediately able to discriminate different patterns of tactile stimula-
tion derived from the camera images. Simple geometric shapes could be re-
cognized by the subjects after some learning. The subjects reported that they
achieved this by different successive patterns of tickling or irritating sensa-
tions on their skin at the sites of tactile stimulation, but their psychophysical
performance was poor. However, when the subjects were allowed to operate
the camera by themselves for actively exploring their environment, the mode
of perception changed fundamentally. After about 10 hours of exploration,
the subjects perceived objects in front of them neglecting the tactile input
most of the time (Bach-y-Rita & Kercel, 2003). Although the stimulation re-
mained tactile, they had shifted their mode of perception from a body-bound
tactile sensation located on their back to distal objects in the external space in
front of them. This also dramatically increased the psychophysical performance of the subjects: despite the limited spatial resolution of the TVSS, subjects managed to localize objects in three-dimensional space, to characterize the shape of an object, and to recognize objects, even faces. This is a striking illustration of how embodied agents can enactively shift the boundaries of their perception, and thus the boundaries of their selves.

Figure 3: Embodied cognition: brain, body, and environment as embedded dynamic systems [after Fig. 10 in Klein T.J. and Lewis M.A., Journal of Neural Engineering]
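The core transformation of the TVSS is simple enough to sketch: a camera frame is averaged down to the 20 x 20 tactor grid and quantized into a few drive levels. The grid size matches the device described above; the frame size and the number of levels are invented for illustration:

```python
def camera_to_tactile(frame, out_rows=20, out_cols=20, levels=8):
    """Downsample a grayscale frame (list of rows, pixel values 0-255)
    to a 20 x 20 grid of discrete vibrotactile drive levels, in the
    spirit of the Bach-y-Rita TVSS."""
    in_rows, in_cols = len(frame), len(frame[0])
    tactile = []
    for r in range(out_rows):
        row = []
        for c in range(out_cols):
            # Average the block of source pixels feeding this tactor.
            r0 = r * in_rows // out_rows
            r1 = max((r + 1) * in_rows // out_rows, r0 + 1)
            c0 = c * in_cols // out_cols
            c1 = max((c + 1) * in_cols // out_cols, c0 + 1)
            block = [frame[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            mean = sum(block) / len(block)
            row.append(round(mean / 255 * (levels - 1)))  # quantized drive level
        tactile.append(row)
    return tactile

# A synthetic 40 x 40 frame with a bright square in the middle.
frame = [[255 if 10 <= i < 30 and 10 <= j < 30 else 0 for j in range(40)]
         for i in range(40)]
tactile = camera_to_tactile(frame)
print(tactile[10][10], tactile[0][0])  # strong drive at the centre, none at a corner
```

Note that this mapping alone explains nothing about perception; as the experiments show, what matters is that the subject can move the camera and thereby close the sensorimotor loop through the device.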
Instead of doing such fancy experiments, we can also observe ourselves or others, for example, in using a smartphone. These devices are not ordinary
tools just used by us. In the dance of our fingers, smartphones become inse-
parably linked to the sphere of our body and our mind (Clark, 2010). In this
respect it might be not so important how deeply machines are implanted into
our body or brain. Much more important are the ways the machines are cou-
pled to the agent, and how deeply machines can get integrated into the embodied
dynamics, defining the boundary between agent and world. The sensory cortex prostheses described before are an interesting example in this respect (Tehovnik & Slocum, 2013; Deliano et al., 2009). Through the direct electric
stimulation of neurons in visual or auditory cortex, the perception of dots of light or of sounds, respectively, can be elicited right away. However, as we have
shown in animal experiments, the perception of these phosphenes or audenes
is not just a correlate of the activity of the directly excited neurons, but in-
volves the operation of a recurrent feedback circuitry that engages many sen-
sory, emotional and motor brain regions (Happel et al., under review;
Deliano et al., 2009a). However, clinical trials with human subjects implanted with prototypes of visual cortex prostheses have not yet managed to establish a real-world coupling that allows for seeing objects or visual
scenes. One reason for this might be that the elicited phosphenes move to-
gether with the eye. By this type of coupling, phosphenes are always per-
ceived as fixed to the eye, but not as objects in the external environment separate from the user. In contrast, such external objects can emerge from the cou-
plings constituted by sensory substitution devices. But here, the type of
coupling does not allow for creating the sensation of light and color. For this reason, these devices have not yet become much more attractive to blind subjects than their canes. Also, the intentional control of robotic devices via corti-
cal motor interfaces (Lebedev & Nicolelis, 2011) relies on a specific way of
agent-world coupling. This control is not achieved right away by the user, but concurrently requires learning on the part of the agent, and adaptation of decoding schemes on the part of the machine (Hatsopoulos & Donoghue, 2009). The coevolution of agent and machine during training then creates a match between the brain activity and the decoding schemes of the machine, which in the end appears as mind reading by the machine.
For a brain- or human-machine interface to work properly, what matters is not a broad-band connection transmitting large amounts of information. What is required is an agent capable of integrating the interface into its embodiment. In doing so, the agent can open new communication channels with the world. However, as we have seen, this requires effort and learning on the side of the agent, and flexibility and adaptation on the side of the machine.
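The two-sided adaptation just described can be caricatured in a few lines of code: a linear decoder adapts its weights to the user's neural activity while, in parallel, a simulated "agent" slowly reshapes its neural code toward whatever the decoder rewards. All dimensions, learning rates, and the linear forms are illustrative assumptions, not any published decoding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, dim = 32, 2    # recorded neurons, cursor dimensions (assumed)
W = rng.normal(scale=0.01, size=(dim, n_units))   # machine: linear decoder
M = rng.normal(scale=0.1, size=(n_units, dim))    # agent: intent -> firing rates

eta_machine, eta_agent = 0.05, 0.01

for trial in range(2001):
    intent = rng.normal(size=dim)        # intended cursor velocity
    rates = M @ intent                   # neural activity expressing the intent
    decoded = W @ rates                  # the machine's readout
    err = intent - decoded
    # Machine side: a least-mean-squares update pulls the decoder
    # toward the agent's current neural code.
    W += eta_machine * np.outer(err, rates)
    # Agent side: learning nudges the neural code so that the decoder's
    # output tracks the intent (a crude stand-in for user learning).
    M += eta_agent * np.outer(W.T @ err, intent)
    if trial % 500 == 0:
        print(f"trial {trial:4d}  |error| = {np.linalg.norm(err):.3f}")
```

The sketch makes only the conceptual point of the paragraph above: neither side reads the other outright, and a modest channel plus mutual adaptation suffices for the match that, from the outside, looks like mind reading.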
In an embodied agent, cognition and lived experience cannot be attributed to any single part, but rely on the cooperative interaction between these parts. The effects of removing or altering the parts that embody the agent then depend on their causal roles in the currently enacted agent-world circuit (Clark, 2010). Thus, bodily damage, dysfunctions or lesions of brain regions, or malfunctions of machines coupled to the agent do not simply lead to the loss or change of the function contributed by the affected part under normal conditions. If the affected parts do not belong to
the agent’s embodiment, their loss or dysfunction is irrelevant, and has no
effect on the agent’s behavior. Otherwise, the loss or alteration of integra-
tive parts of the agent will profoundly change the dynamic operations of
its remaining embodying parts, and consequently its mode of cognition
and lived experience. The fact that a smartphone, when taken away or not working properly, might leave its user depressed and with a feeling of being disabled reveals the deep integration of such devices into the user's embodied living. The loss or alteration of parts embodying the agent might largely reduce the degrees of freedom of its behavior. Still, the agent's capacity for embodied action, i.e. the process of setting up new agent-world circuits itself, is quite robust. Even after removal of or damage to large parts of their body, brain, and environment, humans often retain the ability to maintain a lived identity. However, there also exist environmental factors, body parts, and brain regions that are critical for the agent to maintain its embodied activity. If these parts are removed, the agent ceases to exist: it dies.
But not only the removal of relevant parts can restrain the actions of an
agent. Coupled devices might also profoundly disturb the embodied dy-
namics, as becomes clear from the “ratbot” experiment described above, which left the rat as an object remote-controlled via a brain-machine interface (Talwar et al., 2002). Through the direct stimulation of mesolimbic brain regions, as carried out in this experiment, animals learn to display a vigorous appetitive searching behavior interpretable as a strongly amplified
intentional drive. Such a behavior is also elicited by the use of addictive
drugs that interfere with mesolimbic brain structures. In both cases, with mesolimbic stimulation as with drug addiction, the behavior of the agent narrows down to the single goal of seeking the brain stimulation or the drug. The subject gets trapped in a feedforward coupling, which largely reduces the degrees of freedom of the embodied dynamics, leaving the agent with only a small number of selectable order-states. This constrains the agent's behavior so much that it appears to be remote-controlled.
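The narrowing to a small number of selectable order-states can be pictured with a toy dynamical system: without external drive it offers two attractors between which behavior can settle, while a strong feedforward drive, a crude stand-in for the mesolimbic stimulation, tilts the landscape until only one remains. The equation and all parameters are deliberately simple assumptions, not a model of the rat experiment.

```python
import numpy as np

def count_attractors(drive: float, steps: int = 200, dt: float = 0.05) -> int:
    """Integrate the toy system dx/dt = x - x**3 + drive from many
    initial conditions and count the distinct end states reached,
    i.e. the number of selectable order-states."""
    finals = set()
    for x0 in np.linspace(-2.0, 2.0, 40):
        x = x0
        for _ in range(steps):
            x += dt * (x - x**3 + drive)   # forward Euler step
        finals.add(round(x, 2))
    return len(finals)

print(count_attractors(drive=0.0))  # 2: behavior can still go either way
print(count_attractors(drive=1.0))  # 1: the 'remote-controlled' regime
```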
In the embodied framework, not only research on brain-machine interfaces but, more broadly, research on human-machine interfaces shifts its focus from quantitative differences in information transfer to qualitative differences in the perceptuomotor, emotional, and cognitive modes of embodied action, which arise either from the loss or alteration of the parts embodying the agent, or from the agent's coupling with a machine. From this perspective, the benefit of a human-machine interface cannot be defined by the researcher alone, but only in close cooperation with the users of the interface (Varela, 1999).
Although, as the philosopher Andy Clark frames it, we are “natural born cyborgs” (Clark, 2010), and even though the conception of dynamic embodiment is still a mechanistic one, the new conceptions and metaphors of the relationship between humans and machines presented in this article allow us to escape from a technological determinism that would ultimately make us cease to be human. As the cyberfeminist philosopher Donna J. Haraway points out, we can achieve this by recognizing that “[t]he machine is not an it to be animated, worshipped, and dominated. The machine is us, our processes, an aspect of our embodiment” (Haraway, 1991).
References
Arieli A., Sterkin A., Grinvald A., & Aertsen A. (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273: 1868–1871.
Bach-y-Rita P., Collins C.C., Saunders F.A., White B., and Scadden L. (1969)
Vision substitution by tactile image projection. Nature 221: 963–964.
Bach-y-Rita P., and Kercel S.W. (2003) Sensory substitution and the human –
machine interface. Trends in Cognitive Sciences, Vol. 7, No. 12.
Beer R. D. (2000) Dynamical approaches to cognitive science. Trends Cogn
Sci. 4: 91–99.
Bennett M.R., and Hacker P. (2008) History of Cognitive Neuroscience: A Conceptual Investigation. Wiley-Blackwell, Oxford.
Berger, T.W., Hampson, R.E., Song, D., Goonawardena, A., Marmarelis, V.Z.,
Deadwyler, S.A. (2011) A cortical neural prosthesis for restoring and enhancing
memory. Journal of Neural Engineering, Volume 8, Issue 4.
Braun, S.M.G., Jessberger, S. (2013) Adult neurogenesis in the mammalian
brain. Frontiers in Biology, Volume 8, Issue 3, June 2013, Pages 295–304.
Chiel H.J. & Beer R.D. (1997) The brain has a body: adaptive behavior
emerges from interactions of nervous system, body and environment. Trends
Neurosci. 20: 553–557.
Clark, A., (2010) Supersizing the Mind: Embodiment, Action, and Cognitive
Extension. Oxford University Press.
Clynes M.E. & Kline N.S. (1960) Cyborgs and Space. Astronautics: 26–27.
Collinger, J.L., Wodlinger, B., Downey, J.E., Wang, W., Tyler-Kabara, E.C.,
Weber, D.J., McMorland, A.J.C., Velliste, M., Boninger, M.L., Schwartz, A.B.
(2013) High-performance neuroprosthetic control by an individual with tetraplegia.
The Lancet, Volume 381, Issue 9866, Pages 557–564.
Damasio A. (2005) Descartes’ Error: Emotion, Reason, and the Human Brain,
Putnam, 1994; revised Penguin edition, 2005.
De Charms C., and Zador A., (2000). Neural Representation and the Cortical
Code. Annu. Rev. Neurosci. 23: 613–647.
Deliano M., Scheich H., and Ohl F.W. (2009a). Auditory cortical activity after
intracortical microstimulation and its role for sensory processing and learning.
J. Neurosci. Dec 16;29(50):15898–909.
Deliano M., and Ohl F.W. (2009b) Neurodynamics of category learning:
towards understanding the creation of meaning in the brain. New Mathematics and
Natural Computation, Vol. 5, Issue 1, pages 61–81.
Deliano M. (2010), Prothesen für das Gehirn: Blinde sehen, Lahme gehen,
Taube hören? In Böhlemann P., Hattenbach A., and Markus P. [Eds.] Der machbare
Mensch? Moderne Hirnforschung, biomedizinisches Enhancement und christli-
ches Menschenbild (Villigst Profile 13), Lit-Verlag, Münster.
DeMott D.W. (1966) Cortical micro-toposcopy. Med. Res. Eng. 5: 23–29.
Dreyfus S.E., Dreyfus H.L. (1980) A Five-Stage Model of the Mental
Activities Involved in Directed Skill Acquisition. Washington, DC: Storming
Media.
Freeman W.J. (1975) Mass Action in the Nervous System: Examination of the Neurophysiological Basis of Adaptive Behavior Through the EEG. Academic Press.
Freeman W.J. (2000a) Mesoscopic neurodynamics: from neuron to brain.
J Physiol Paris 94: 303–322.
Freeman W.J. (2000b) Neurodynamics: An Exploration in Mesoscopic Brain
Dynamics (Perspectives in Neural Computing). Springer-Verlag.
Freeman W.J. (2000c) Emotion is Essential to All Intentional Behaviors. In:
Emotion, Development, and Self-Organization: Dynamic Systems Approaches to
Emotional Development (eds. M.D. Lewis and I. Granic): 209–235. Cambridge
University Press, Cambridge, U.K.
Friston K. (2010) The free-energy principle: a unified brain theory? Nat Rev
Neurosci. 11(2):127–38.
Glimcher P.W., Fehr E., Camerer C., Poldrack R.A. (2008) Neuroeconomics:
Decision Making and the Brain. Academic Press.
Grill, W.M., Norman, S.E., Bellamkonda, R.V. (2009) Implanted neural inter-
faces: Biochallenges and engineered solutions. Annual Review of Biomedical
Engineering. Volume 11, pages 1–24.
Haken H. (1983) Synergetics, an Introduction: Nonequilibrium Phase Transi-
tions and Self-Organization in Physics, Chemistry, and Biology, 3rd rev. enl. ed.
New York: Springer-Verlag.
Happel M., Deliano M., Hanschuh J., and Ohl F.W. (2013) Enhanced cogni-
tive flexibility in reversal learning induced by removal of the extracellular matrix in
auditory cortex. Under review by Journal of Neuroscience.
Haraway D.J. (1991) Simians, Cyborgs and Women. Routledge, New York.
Hatsopoulos, N.G., Donoghue, J.P. (2009) The science of neural interface
systems. Annual Review of Neuroscience, Volume 32, pages 249–266.
Hoy, K.E., Fitzgerald, P.B. (2010) Brain stimulation in psychiatry and its
effects on cognition. Nature Reviews Neurology, Volume 6, Issue 5, May 2010,
pages 267–275.
Hurley S., Noë A. (2003) Neural plasticity and consciousness. Biology and
Philosophy 18: 131–168.
Kathan B. (2003) Das Elend der ärztlichen Kunst. Eine andere Geschichte der
Medizin. Kadmos Kulturverlag, Berlin.
Krieg W. (1953) In: Functional Neuroanatomy, pp. 207–208. Blakiston, New York.
Kurzweil R. (2012) How to Create a Mind. Viking.
Lakoff G., and Johnson M. (1980) Metaphors We Live By. University of Chicago Press.
Lakoff G. (1987) Women, Fire, and Dangerous Things: What Categories
Reveal About the Mind. The University of Chicago Press.
Lebedev, M.A., Nicolelis, M.A.L. (2011) Toward a whole-body neuroprosthetic. Progress in Brain Research, Volume 194, pages 47–60.
Lilly J.C. (1954) Instantaneous relations between the activities of closely
spaced zones on the cerebral cortex; electrical figures during responses and sponta-
neous activity. Am. J. Physiol. 176: 493–504.
Maturana H., and Varela F. (1992) The Tree of Knowledge. Shambhala, revised edition.
Noë A. (2005) What does change blindness teach us about consciousness?
Trends Cogn Sci. May; 9(5): 218.
Ohl F.W., Scheich H., & Freeman W.J. (2001) Change in pattern of ongoing
cortical activity with auditory category learning. Nature 412: 733–736.
Ohl F.W., Deliano M., Scheich H., & Freeman W.J. (2003a) Early and late
patterns of stimulus-related activity in auditory cortex of trained animals. Biol.
Cybern. 88: 374–379.
Ohl F.W., Deliano M., Scheich H., & Freeman W.J. (2003b) Analysis of
evoked and emergent patterns of stimulus-related auditory cortical activity. Rev.
Neurosci. 14: 35–42.
Ohl F.W., Scheich H. (2007) Chips in your head. Scientific American Mind:
64–69.
O’Regan J.K. & Noë A. (2001) A sensorimotor account of vision and visual
consciousness. Behav. Brain Sci. 24: 939–973.
Rorty, R. (1979) Philosophy and the Mirror of Nature. Princeton University
Press.
Rudrauf D., Lutz A., Cosmelli D., Lachaux J.P., Le Van Quyen M. (2003)
From autopoiesis to neurophenomenology: Francisco Varela’s exploration of the
biophysics of being. Biol Res.; 36(1): 27–65.
Schmidt E.M., Bak M.J., Hambrecht F.T., Kufta C.V., O’Rourke D.K., &
Vallabhanath P. (1996) Feasibility of a visual prosthesis for the blind based on
intracortical microstimulation of the visual cortex. Brain 119 (Pt 2): 507–522.
Talwar S.K., Xu S., Hawley E.S., Weiss S.A., Moxon K.A., & Chapin J.K.
(2002) Rat navigation guided by remote control. Nature 417: 37–38.
Tehovnik, E.J., Slocum, W.M. (2013) Electrical induction of vision. Neuro-
science and Biobehavioral Reviews, Volume 37, Issue 5, Pages 803–818.
Varela F.J., Thompson E.T., & Rosch E. (1992) The Embodied Mind:
Cognitive Science and Human Experience. The MIT Press, Cambridge, Massachu-
setts.
Varela F.J. & Shear J. (1999) The View from Within: First-person Approaches
to the Study of Consciousness. Imprint Academic.
Varela F.J. (1999) Ethical Know-How: Action, Wisdom, and Cognition. Stanford University Press.
von Neumann J. (1958) The Computer and the Brain. Yale University Press, 2000.
Yizhar, O., Fenno, L., Davidson, T., Mogri, M., Deisseroth, K. (2011) Opto-
genetics in Neural Systems. Neuron, Volume 71, Issue 1, pages 9–34.