43. Ensuring a continuity of service constitutes no less than a necessary condition of the dereferencing process. It is often monitored by third parties (rather than by the institution that published the resource, which guarantees that compliance of the representations is maintained in the long run), thus adding a further line of expenditure when summarizing the efforts required to maintain the resource over time, or, more precisely, the coupling between the processes of dereferencing and qualification that we have previously analyzed.
44. The lack of substances brings this very issue to the foreground: "Why do things subsist? Once [enduring] substance has been excluded, subsistence comes to the fore, and then the big question is how many ways there are for the entities to graze their subsistence in the green pastures." Latour et al. 2011, 48.
45. For an introduction to this concept, see Monnin 2009. More
recently, Luciano Floridi has been using a similar line of
argument.
46. Halpin & Thompson 2005.
47. The fact that reference is no longer the issue sits well with the Web's reluctance to deal with the notion of truth. As already said, the epistemology of the Web is one of trust. Content providers, including resource publishers, must thus ensure that the definition they give of a resource (its encoding properties) is trustworthy. See also Henry Thompson, member of the TAG, who has dedicated a lot of thought to the analysis of URI persistence: "persistent identifier efforts can and should save huge amounts of fuss by focussing [sic] on the non-technology substrate issues involved in producing persistence" (Thompson 2007).
48. http://www.w3.org/TR/rdf-mt/
49. Specifically referring to RDF, Patrick Hayes called this problem
“Death by layering,” thus summarizing the issue at stake in
a most fitting way: “names have a different logical status
at different levels.” What’s true within the framework of
the Semantic Web is all the more true within the broader
framework of the Web itself here examined.
References
Arwe, J. 2011. Coping with un-cool URIs in the web of linked data.
http://www.w3.org/2011/09/LinkedData/ledp2011_submission_5.pdf
(January 31, 2012).
Berners-Lee, T. 1998. Cool URIs don’t change. http://www.w3.org/
Provider/Style/URI (April 23, 2011).
Courtine, J.-F. 1990. Suarez et le système de la métaphysique. Paris: Presses Universitaires de France.
Courtine, J.-F. 2003. Les catégories de l'être: Études de philosophie ancienne et médiévale. Paris: Presses Universitaires de France.
Erenkrantz, J. R. 2009. Computational REST: A New Model for
Decentralized, Internet-Scale Applications. Ph.D. Thesis. University of
California, Irvine.
Fielding, R. T. 2000. Architectural Styles and the Design of Network-based
Software Architectures. Ph.D. Thesis. University of California, Irvine.
Fielding, R. T. & Taylor R. N. 2002. Principled design of the modern
Web architecture. ACM Transactions on Internet Technology (TOIT)
2(2):115-50.
Floridi, L. 2005. The ontological interpretation of informational privacy.
Ethics and Information Technology 7(4):185-200.
Halpin, H. 2008. The principle of self-description: identity through
linking. Proceedings of the 1st IRSW2008 International Workshop on
Identity and Reference on the Semantic Web, ed. Paolo Bouquet et al.
Tenerife, Spain. CEUR Workshop Proceedings. http://ceur-ws.org/Vol-
422/irsw2008-submission-13.pdf.
Halpin, H. 2008b. Philosophical engineering: towards a philosophy of
the web. APA Newsletter on Philosophy and Computers 7(2).
Halpin, H. 2009. Sense and Reference on the Web, Ph.D. Thesis. Institute
for Communicating and Collaborative Systems, School of Informatics,
University of Edinburgh. http://www.ibiblio.org/hhalpin/homepage/
thesis/.
Halpin, H. & Presutti, V. 2009. An ontology of resources: solving the
identity crisis. In The Semantic Web: Research and Applications, ed.
Aroyo et al. Springer-Verlag Berlin, Heidelberg.
Halpin, H. & Thompson, H. S. 2005. Web Proper Names: Naming
Referents on the Web. Chiba, Japan. http://www.instsec.org/2005ws/
papers/halpin.pdf (May 25, 2009).
Halpin, H. et al. 2010. When owl:sameAs isn’t the same: an analysis of
identity in linked data. The Semantic Web–ISWC 2010:305-20.
Hayes, P. J., (ed.) 2004. RDF Semantics (W3C Recommendation 10
February 2004). http://www.w3.org/TR/rdf-mt/ (February 20, 2012).
Hayes, P. J. 2009. BLOGIC or Now What’s in a Link? http://videolectures.
net/iswc09_hayes_blogic/ (February 20, 2012).
Hayes, P. J., and H. Halpin. 2008. In defense of ambiguity. International
Journal on Semantic Web & Information Systems 4(2):1-18.
Husserl, E., passim.
Jacobs, I. & Walsh N. 2004. Architecture of the World Wide Web, Volume
One (W3C Recommendation 15 December 2004). http://www.w3.org/
TR/webarch/#formats (February 1, 2009).
Koepsell, D. R. 2003. The Ontology of Cyberspace: Philosophy, Law, and
the Future of Intellectual Property. New edition. Open Court Publishing
Co, U.S.
Kunze, J. 1995. RFC 1736 - Functional Recommendations for Internet
Resource Locators. http://www.rfc-editor.org/rfc/rfc1736.txt (February
20, 2012).
Latour, B. 2007. Quel cosmos? Quelles cosmopolitiques? In L'émergence
des cosmopolitiques, ed. Lolive, J. & Soubeyran, O., 69-84. Colloque de
Cerisy, Recherches, La Découverte, Paris.
Latour, B., Harman, G., Erdelyi, P. 2011. The Prince and the Wolf: Latour and Harman at the LSE. Zero Books.
Livet, P. & Nef, F. 2009. Les êtres sociaux: Processus et virtualité. Hermann.
Monnin, A. 2009. Artifactualization: Introducing a new concept.
Southampton, United Kingdom. http://hal-paris1.archives-ouvertes.fr/
hal-00404715_v1/ (September 21, 2009).
Monnin, A. 2011. La ressource et l'ontologie du Web (to be published
in Intellectica). http://hal-paris1.archives-ouvertes.fr/hal-00610652
(February 5, 2012).
Monnin, A. 2012. L’ingénierie philosophique comme design ontologique.
In Archéologie des nouvelles technologies, Réel-Virtuel: enjeux du
numérique, 3.
Sauermann, L., & Cyganiak, R. 2008. Cool URIs for the Semantic Web
(W3C Interest Group Note 03 December 2008). http://www.w3.org/TR/
cooluris/ (February 1, 2009).
Thomasson, A.L. 1999. Fiction and Metaphysics. Cambridge University
Press.
Thompson, H. S. 2007. URIs and Persistence: How long is forever? http://
www.ltg.ed.ac.uk/~ht/UKOLN_talk_20070405.html (May 31, 2010).
Thompson, H. S. 2012. An introduction to naming and reference on the
Web. http://www.ltg.ed.ac.uk/~ht/PhilWeb_2012/ (February 5, 2012).
Vuillemin, J. 1986. What are Philosophical Systems? Cambridge
University Press.
Zalta, E. N. 2003. Referring to fictional characters. Dialectica 57(2):243-
54.
The Creativity Machine Paradigm:
Withstanding the Argument from
Consciousness
Stephen L. Thaler
Imagination Engines, Inc.
Abstract
In Alan Turing’s landmark paper, “Computing Machinery and
Intelligence,” the famous cyberneticist takes the position that
machines will inevitably think, supplied adequate storage,
processor speed, and an appropriate program. Herein we propose the solution to the last of these prerequisites for contemplative machine intelligence, the required algorithm, illustrating how it weathers the criticism, well anticipated by Turing, that a computational system can never attain consciousness.
1. Introduction. In his 1950 article in Mind, entitled “Computing
Machinery and Intelligence,” Alan Turing anticipated nine
objections to his conjecture that machines would one day
think, and that they could succeed at the so-called “imitation
game.” The foremost of these objections, in my mind, was the
so-called “argument from consciousness” in which machines
are denied full contemplative status on the basis of their lack
of emotion, in particular the feelings they have about their own
thinking. Appropriately, Turing quotes Professor Jefferson’s
Lister Oration from 1949 to drive home the dissenting point of
view, “Not until a machine can write a sonnet or compose a
concerto because of thoughts and emotions felt, and not by the
chance fall of symbols, could we agree that machine equals
brain—that is, not only to write it but know that it had written it.
No mechanism could feel pleasure at its successes, grief when
its valves fuse, be warmed by flattery, be made miserable by
its mistakes, be charmed by sex, or depressed when it cannot
get what it wants.”
Recently, a new direction in artificial intelligence technology,
called the “Creativity Machine Paradigm,” allows the generation
of new ideas and plans of action without the “chance fall of
symbols,” accelerated toward its goals by what is tantamount
to the subjective pleasure or frustration felt by the human mind
as it originates seminal concepts. Whereas this connectionist
principle has not written poems in iambic pentameter, it has
proven itself capable of both generating and interpreting natural
language, to the extent of autonomously fomenting controversy
over its self-originated commentary (Hesman 2004). While
not generating a concerto, it has achieved the equivalent by
spontaneously authoring an album of original musical tunes
(Thaler 2007) that are capable of passing the equivalent of a
“musical Turing test,” after being mentored not by “if-then-else”
heuristics or tedious statistical studies, but by the detection of
the raw emotions on the faces of its audience. In military projects,
battlefield robots have bootstrapped impressive tabula rasa
behaviors, spontaneously developing improvised reactions
to unexpected scenarios, and displaying socially conscious
gestures of cooperative planning and mutual protection within
a swarm (Hambling 2006). In all three of these examples, the
system was well aware of the consequences of its generated
concepts before unleashing them upon the world. With its
only “valves” being transistors, and its reproductive tendencies
limited to software-based object instantiation, I will argue
that it experiences a gamut of emotions in response to both its external environment and its own imaginings, ranging from frustration,
to panic, to elation. These feelings then govern the generation,
acceptance, and savoring of its own ruminations.
2. Background. To properly relate the concept of a Creativity
Machine it is important to review several underlying building
blocks that contribute to the paradigm’s ability to achieve not
only thought, but also self-regulating meta-thought. These key
principles include the perceptron and what I have coined the
“imagitron.”
2.1 Perceptrons. To most readers, the more familiar component
of a Creativity Machine is the perceptron, a specialized neural
network that emulates the non-contemplative aspects of
cognition wherein raw numerical patterns, representing
both interoceptive and exteroceptive inputs to the brain, are
mapped to associated memories. Just as within neurobiology,
the creation of such mappings is achieved through the
adjustment of synaptic connection strengths, via simple
learning algorithms, as numerous exemplary input-output
pair patterns are applied to the network. Having attained such
mappings, two very important learning processes have taken
place within the perceptron: (1) distributed colonies of neurons
have synaptically bound themselves into token representations
of frequently encountered features within the body of input
training exemplars, and (2) additional synapses have acquired
strengths that reflect the intrinsic relationships between such
features in generating an associated pattern-based memory at
the net’s output layer.
Introspectively relating this learning process to human
cognition, the world observable to the brain is automatically
carved up into its dominant themes, consisting of repeating
entities and scenarios in the external world. As such themes
appear within the outer reality, the token representations
thereof, once again consisting of distributed colonies of
neurons, activate, thereafter driving the subsequent excitation
of associated memories within downstream neuron layers.
During such forward propagation of patterns, no contemplative
processes are at work. Instead, the net reflexively and
instinctively generates a stored memory in response to a
sensed pattern originating from the environment. Therefore,
the process emulates the brain’s inherent ability to generate
immediate and hopefully useful associations when the time-
intensive luxury of understanding is detrimental to the host
organism.
One skilled in both artificial neural networks and the
workings of the brain realizes that while the perceptron
epitomizes non-contemplative perception via pattern
association, neurobiological perception involves hierarchical
cascades of neural assemblies and not just a single, monolithic
neural network. Within these compound neural architectures,
an individual perceptron may activate into a particular memory.
A subsequent perceptron accepting the output memory of the
first will then activate into a related memory, and so forth and
so on. In this manner, multiple perceptrons are recruited into
associative chains, often terminating upon themselves to form
closed loops. The topology of such chains may be dynamic due
to their inclusion of specialized neurons capable of triggering the
secretion of synapse-altering agents (i.e., neurotransmitters and
neurohormones). Because of such weight plasticity, memory
linkages will not only be constantly rerouting themselves, but
the experiences stored therein will be deforming themselves
to various degrees. The net result, if you will, is that the co-
activating neural patterns will consist of a mixture of intact and
degraded experience.
To any given brain, such complex patterns of neural
activations will be idiosyncratic in that all neural modules
cumulatively habituate to one another, in what amounts to a
highly encrypted communications scheme. Such subjective
experience cannot be shared with other neural networks from
another brain, since these “outsider nets” do not possess the
hard-earned encryption key that has been attained through cumulative, joint exposure of the resident nets to sensory
patterns. In lieu of such joint training, we as humans employ
very slow and inefficient schemes such as symbolic language
to convey these jointly activating memories, the result being the
wholesale loss of information contributing to an overall picture
that falls short of the synaptic reality.
Even if supplied such synaptic detail, we would find that
interconnected memories are severely lacking in detail and
fidelity, with receiving networks filling in features as does
the visual cortex in supplying multiple draft guesses as to
information within the retinal blind spot (Dennett 1991). Due
to the accumulated guesswork within such transiently linked
neural modules, any semblance of reality degrades as in a
— Philosophy and Computers —
— 21 —
child's game of telephone. Accordingly, the story contained within such cascaded memories is much greater than their sum. For this reason, I call such chained memories an "associative gestalt." Because of their intangibility, resistance to high-level description, circularity, self-driven evolution, and their subjective interpretation of an objective reality, I identify such "associative gestalts" with emotions and feelings.

With no loss of generality, the monolithic perceptron can likewise carry out the pattern association that many generations of human beings regard simply as subjective experience. Henceforth I will at times represent the complex associative chains and loops as the flattened output pattern of a single perceptron, which may represent a sublime recollection, past physical pleasure, or, if need be, a nondescript buzzing sensation. To achieve these mappings, both input and output patterns presented to the network by the environment will need to somehow correlate spatially and/or temporally as synapses adjust their strengths to learn the association. Strictly speaking, we would be lax in our language to claim that the input and output patterns are truly related to one another, when in reality all we can say with confidence is that the patterns are associated in a strictly mathematical "mapping" sense.

2.2 Imagitrons. If the synaptic connection weights of a trained perceptron are subjected to time-varying disturbances, two very important things happen to make it a pattern generator rather than a pattern associator: (1) synaptic disturbances serve as a succession of pseudo-inputs from the environment, driving the activation turnover of downstream processing units; and (2) the same connective disruptions continually reshape the attractor landscape of the network so as to create new features therein. Because of such internal noise, the net's activation trajectory ranges over an unstable and dynamic attractor landscape as it activates into patterns it has never before encountered within its environment. Appropriately, I call such generative perceptrons "imagitrons."

Realizing that the synaptic organization of the perceptron implicitly contains the rules binding neurons into tokenized representations of the external world, as well as the intrinsic heuristics interrelating such features, variations upon such connection weights are the only means by which to force the perceptron turned imagitron to exit its absorbed conceptual space and to generate something other than the mapping it has gleaned through training. Introducing such weight deviations, the repeating features of the input space are transmogrified and their interrelationships softened or broken. The overall result is that the network fails to activate into its learned output exemplars at its terminal layer. In effect, the net is then generating false memories, or confabulations, as its synaptic connections are continually perturbed (Thaler 1998).

In Figure 1, we present a general result for a multilayer perceptron (MLP) based associative memory1 that has learned a mapping and is then subjected to increasing levels of synaptic perturbation, plotting the probability, Pmem, of generating an intact output memory as the mean level of synaptic disturbance, <∆w>, is increased. Typically, as such mean perturbation level first rises, the network predominantly outputs, to within a small error, the training exemplars it has already been exposed to, patterns tantamount to the network's memories (Thaler 1995b). However, beyond a critical threshold of perturbation, near the end of what is called the "regime of graceful degradation" (near <∆w>c), the network begins to output slightly defective memories or novel patterns that are mathematically distinct from what the network has directly experienced (Thaler 1997a). Increasing the noise level even more, the hard-earned connection weights become randomized, thereby destroying the absorbed constraint relationships that capture the essence of the learned conceptual space. As a result the network tends to output nonsensical patterns.

Based upon the veracity and utility of output patterns produced by the imagitron as mean synaptic noise levels increase, I have identified three distinct regimes (Thaler 1996) that are called out in Figure 1:
Figure 1. Confabulation Generation within a Synaptically Perturbed Perceptron. Shown here is a representative plot of the probability of activating a memory, Pmem, versus increasing levels of mean synaptic perturbation, <∆w>, within a trained perceptron. The so-called "U regime" is characterized by intact memory generation, and the "W regime" is marked by unconstrained, nonsensical output patterns. The narrow "V regime" near the critical point, <∆w>c, produces novel patterns largely qualifying as members of the learned conceptual space. The left inset depicts this transiently perturbed network's weights in red.2
U-Mode - Generally, U represents an imagitron into which minimal noise has been introduced (<∆w> < <∆w>c), thus driving it to visit a series of rote memories that have been drawn from the network's previous training experience, its universe, if you will.

V-Mode - Imagitrons operating at the critical noise level, near <∆w>c, are depicted as V, suggesting that they are producing virtual memories of potential things and scenarios that could be part of the net's external environment, but hitherto have not been directly experienced by it through learning.

W-Mode - Finally, W denotes an imagitron driven by noise levels in excess of those injected in the critical regime (<∆w> > <∆w>c). As a result, most of the constraint relationships characteristic of the conceptual space have been destroyed, leading to the generation of predominantly meaningless noise, in a manner reminiscent of the blind watchmaker allegory.
When enlisting an imagitron to search for solution patterns, it should be apparent that the U-mode is only useful when purposely selecting among the network's finite recollections. On the other hand, W-mode represents an imagitron sufficiently "battered" so as to dissolve the hard-earned constraints and thereby generate an enormous search space littered predominantly with nonsensical patterns. It should make sense that the intermediate V regime, what has been called the "multi-stage" regime (Rowe and Partridge 1993) in rule-based, computational creativity research, offers the best chance at producing a pattern that is novel in comparison with the network's memories, yet qualifies as a potential thing or action representative of the conceptual space learned by the net. In mathematical terms, the V regime produces output patterns that largely satisfy constraint relationships implicit within the imagitron's training patterns, thus qualifying them as potential and novel members of the learned conceptual space.
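The Figure 1 experiment can be approximated in a few lines by training the associative memory from the earlier sketch and then measuring how often the transiently perturbed net still emits intact memories as zero-mean Gaussian weight noise of growing RMS amplitude is injected. This is an illustrative reconstruction under assumed noise statistics, not the author's original code; Pmem should sit near 1 in the U regime, slide through V, and collapse toward 0 in W.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Train a small associative memory exactly as in the earlier sketch.
X = ((rng.permutation(256)[:16][:, None] >> np.arange(8)) & 1).astype(float)
Y = rng.integers(0, 2, size=(16, 8)).astype(float)
W1 = rng.normal(0.0, 0.5, (8, 32)); W2 = rng.normal(0.0, 0.5, (32, 8))
for _ in range(10000):
    H = sigmoid(X @ W1); O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O); dH = (dO @ W2.T) * H * (1 - H)
    W2 -= 0.2 * H.T @ dO; W1 -= 0.2 * X.T @ dH

def p_mem(rms, trials=200):
    """Probability that the perturbed net still emits its intact memories."""
    hits = 0.0
    for _ in range(trials):
        # Transient, zero-mean synaptic perturbations of a given RMS level.
        P1 = W1 + rng.normal(0.0, rms, W1.shape)
        P2 = W2 + rng.normal(0.0, rms, W2.shape)
        out = sigmoid(sigmoid(X @ P1) @ P2) > 0.5
        hits += np.all(out == (Y > 0.5), axis=1).mean()
    return hits / trials

# Sweeping the noise upward traverses the U, V, and W regimes as Pmem decays.
for rms in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(f"<dw> = {rms:.2f}   Pmem = {p_mem(rms):.2f}")
```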
I sometimes speak of the U, V, and W modes using diagrams like that depicted in Figure 2, where the origin represents a weight-space solution for a trained perceptron, here simplified to two weight dimensions. Any pattern of synaptic perturbation to this perceptron may be represented as the vectorial deviation of the connection weights from their trained-in values. Therefore, for a constant level of root-mean-square (RMS) weight fluctuation, the succession of perturbed vectors should randomly sweep out spherical hyper-surfaces that project down to a circle in the 2-D weight space shown. Below an RMS level corresponding to <∆w>c in Figure 1, the synaptic perturbation vector randomly moves within a circular domain representing the U regime, wherein the perturbations are seeding the formation of rote memories. Approaching the <∆w>c "membrane," in the V regime, the weight perturbations nucleate confabulatory patterns that are slight twists upon the net's absorbed memories. Finally, as perturbation vectors extend into the W regime, the synaptic tumult generates a stream of largely nonsensical, unconstrained patterns.
With imagitron function described in this geometrical fashion, the escape from the conceptual space stored within an imagitron, to produce new notions distinguishable from direct experience, is literally represented by the weight-change vector, ∆w, departing from the U domain and penetrating into the thin V shell. Just before this U-V boundary traversal, the net is generating intact memories at an optimal rate, which might be likened to frenzy. With the slightest "thump" to the mean synaptic perturbation level, the network may catastrophically transition to confabulation generation, wherein notions generalized from, but distinct from, those of the U domain are formed.

Figure 2. A Synaptically Perturbed Perceptron's Exit from Its Learned Conceptual Space. Illustrated here is a two-dimensional slice from the weight space of a perceptron, depicting its weight solution, O. Other neighboring solutions are also shown. Progressively increasing the mean synaptic perturbation level allows the network output patterns to exit the original conceptual space, producing potentially useful novel patterns such as those encountered at V. Increased perturbation levels generate totally unconstrained patterns, represented by W. (Other potential weight-space solutions, O′, O′′, and O′′′, are shown projected from a third weight dimension.)
In producing biological intelligence, the brain's ability to exit a conceptual space by simply modulating the RMS synaptic noise level seems advantageous. Effectively, brains can live
on a cusp, so to speak, and in response to environmental
stress, bathe their neurobiology in slightly increased synaptic
perturbation levels so as to drive them through a bifurcation
separating mundane and improvised thought. It is at
such times that there is the most need for new and viable
strategies to preserve the host organism.
That the fidelity of a neural network's activation patterns to its learned reality is most sensitive to synaptic disturbances should make sense: even within artificial perceptrons, the number of connection weights scales roughly with the square of the number of processing units, therein offering the highest capture cross-section for randomly distributed disordering effects. By far, the most numerous "trip points" for signal transmission in the brain are the chemical synapses, outnumbering neurons 10,000:1. With communication through these neuron gaps achieved via minute packets of neurotransmitter molecules, it would seem that unintelligent evolutionary forces could easily discover the selective advantage of secreting ever so slightly increased levels of perturbations (i.e., diffusing chemical species) so as to think that which had not previously been thought.
2.3 Perceptron-Imagitron Assemblies (Creativity
Machines). When a noise-driven, pattern-generation
network, an imagitron, is coupled with a pattern-recognition
network such as a perceptron, those confabulatory outputs
generated by the former net may be either objectively or
subjectively evaluated by the latter so as to selectively filter
for those true or false memories offering utility or value. Any
such numerical figure of merit generated by the perceptron
may be exploited to modulate noise injected into the
imagitron’s synaptic system. The permanent or transient
combination of at least two such neural assemblies is
called a “Creativity Machine” and the principle, applicable
to any computational platform, the “Creativity Machine
Paradigm.” Within the patent literature both the architecture
and the paradigm are known as “Device for the Autonomous
Generation of Useful Information” (Thaler 1997b) or “Device for
the Autonomous Bootstrapping of Useful Information” (Thaler
2008). These two generations of inventive neural systems are
therefore regarded as “DAGUIs” and “DABUIs,” respectively.
If we construct a specialized DAGUI such that its perceptron
generates a numerically based figure of merit proportional to
the rate at which it is witnessing satisfactory pattern solutions
from the imagitron, the networks equilibrate, with synaptic
noise level automatically moving into the V regime (Thaler
1997c). This equilibrium arises due to the inherent insufficiency
of novel, problem-solving patterns within the U domain and the
sparseness of coherent patterns in the W regime.
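To make this equilibration concrete, here is a toy control loop, not the patented DAGUI implementation: a stand-in imagitron perturbs its lone rote memory with synaptic-style noise, a stand-in perceptron emits a discontent score between 0 and 1, and that score directly modulates the injected noise. The target pattern, noise model, and gain are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

target = rng.integers(0, 2, 8).astype(float)  # pattern the critic deems useful
memory = rng.integers(0, 2, 8).astype(float)  # the imagitron's lone rote memory

def imagitron(noise_rms):
    # Rote memory plus a synaptic-noise-driven deviation (schematic).
    return np.clip(memory + rng.normal(0.0, noise_rms, 8), 0, 1).round()

def discontent(pattern):
    # Perceptron "O" stage: 0 = satisfaction, 1 = utter discontent.
    return 1.0 - np.mean(pattern == target)

noise = 0.05                                  # begin in the quiet U regime
for step in range(10000):
    candidate = imagitron(noise)
    d = discontent(candidate)
    noise = 0.05 + 2.0 * d                    # frustration injects noise; satisfaction quenches it
    if d == 0.0:
        print(f"satisfactory pattern found at step {step}; noise falls to {noise:.2f}")
        break
```

So long as the critic is unhappy, noise floods the generator and drives it out of rote recall; the moment a satisfactory pattern appears, the noise collapses and the system latches onto it.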
Equating the imagitron with the brain's neocortex, I conjecture that the brain resides largely within the vicinity of the V regime of synaptic perturbation, essentially riding a cusp separating rote and novel pattern generation. As noted above, brain modality can thereby shift catastrophically from a mundane stream of consciousness to more inventive ideation purely through the adjustment of the statistical average of synaptic perturbation, <∆w>. In neurobiology and the interconnected endocrine system, environmental stress can result in the secretion of appropriate neurotransmitters that alter long-term potentiation, allowing us to consider that which has not been directly experienced or pondered before. In other words, the ability to rapidly bifurcate into false memory generation is favored by Darwinian selection so as to allow effective strategy generation under traumatic, life-threatening circumstances. What we would consider convergence toward a viable solution would be marked by the subsidence of stress-related neurotransmitters such as adrenaline, as they are swamped out by less perturbative molecular agents such as serotonin and dopamine.
Depending upon the synaptic noise level within the
imagitron, this neural cascade may interact with its environment
in three fundamentally different ways. Referring to Figure 3,
imagitrons and perceptrons may operate at very low noise
levels, making them most attuned to the environment. The
imagitron may serve as an associative memory, comparing any
input environmental pattern, E, against the memories stored
within it. Any patterns deemed novel through this comparison
process (via reconstruction error, δ, Thaler 2000) may be
selectively passed to the perceptron to assess the value, utility, or threat thereof.
As the mean synaptic noise level is raised into the U and V regimes, the imagitron may either straightforwardly or creatively
interpret the input stimulus, E, by activating into several rival
memories or confabulations that are alternating due to synaptic
disturbances. A context-aware perceptron (connections
to environment not shown) may then maintain such noise
so as to juggle these competing E-interpretations until the
perceptron’s “understanding” of the environment pattern is
consistent with the overarching circumstances. At that time,
the perceptron stage modulates the synaptic noise toward
zero, effectively freezing in the environmental pattern’s most
favored interpretation.
Given sufficiently high levels of synaptic fluctuations, the
imagitron is vastly more sensitive to internal disturbances than
to the succession of environmental patterns, E, appearing at the
network’s inputs. It is within these V and sometimes W mode
imagitrons that the equivalent of “eyes-shut” discovery takes
place, with ideas synthesized from the combination of either
intact or degraded token representations of world features.
Figure 3. Changing Function of a Creativity Machine with Increasing Synaptic Noise Levels. As the perceptron injects increasing levels of synaptic noise (red weights), the system transitions from recognizing environmental patterns of interest to inventive interpretation of things and events in the environment. With even more noise, the network becomes "attention deficit," freely imagining based upon a mixture of stored memories and derivative confabulations.

Obviously, Creativity Machines may become much more complex than the canonical two-network system described above. To facilitate the description of their function, whether
synthetic or biological, I have used a symbolism of my own
making (Thaler 1996) that represents observing perceptrons by
the letter “O.” For instance, the cognitive feat of disambiguating
some environmental pattern is describable as an E-U=O
process and the “eyes shut” brand of creativity is denoted
as V=O, with the equal sign conveying the reciprocal dialog
between the V and O neural agencies.
More ambitious forms of discovery involving the identification
of multiple imagitron assemblies simultaneously activating into
juxtapositional concepts may be denoted as “UiVj=O” discovery3
wherein any number of memories (Ui) and confabulations (Vj)
may link into new combinations of tokenized entities or actions
that are all collectively “judged and nudged” via a perceptron, O.
Such juxtapositional discoveries can span the range of cognitive
tasks that include the pragmatic combination, for example, of
box, wheel, and axle memories to produce the epiphanic pattern
of a wheeled vehicle, or the association of a deductive conclusion
from combined predicates.
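A toy rendering of such UiVj=O search, under the loose assumption that each imagitron contributes one tokenized component while a perceptron-like critic scores the juxtaposition (the vocabularies and scoring rule here are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical token vocabularies emitted by two imagitrons: memories (Ui)
# and confabulations (Vj) of component concepts.
bodies = ["box", "plank", "basket"]
rotators = ["wheel+axle", "roller", "ball"]

def perceptron_score(combo):
    # Stand-in "O" stage: a learned figure of merit, here a toy lookup that
    # happens to prize one particular juxtaposition.
    return 1.0 if combo == ("box", "wheel+axle") else rng.uniform(0.0, 0.5)

best, best_score = None, -1.0
for _ in range(200):                     # noise-seeded juxtapositional search
    combo = (bodies[rng.integers(3)], rotators[rng.integers(3)])
    score = perceptron_score(combo)
    if score > best_score:
        best, best_score = combo, score
print("highest-merit juxtaposition:", best)   # the "wheeled vehicle" epiphany
```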
In demonstrating that a Creativity Machine can have
thoughts about its thoughts, the O stage is critical because
it is responsible not only for recognizing useful memories or confabulatory patterns, but also for elevating synaptic perturbation until it is satisfied with the imagitron's output. Typically, the
activation level of one or more output neurons, representing
some figure of merit, can modulate the noise levels injected into
the imagitron. In the simplest of cases, the perceptron could
conceivably incorporate just one output neuron, continuously
activating from a value of 0, symbolizing satisfaction, to an
excitation of 1, representing utter discontent. That single output
could in turn be tied with the effectiveness of any past ideas
upon the environment, as learned through cumulative training.
Whereas such a simple perceptron would not lead to the complex chain of associations I have spoken of as a gestalt, it does produce a parade of memories and potential ideas in what humans might experience as frenzy. Having found
a useful solution pattern, the perceptron could utilize its near-zero output to modulate the imagitron's noise proportionately,
thereby latching onto the currently activating pattern in a
process tantamount to satisfaction and perhaps even ecstasy.
More complex Creativity Machine designs are capable of producing the complex associative gestalts that "tag" neural assemblies, which in turn take charge of the imagitron's injected noise level. As these specialized networks squeeze off the equivalent of adrenaline or serotonin, they are simultaneously activating into an evolving chain of associations. That these are not the kinds of associations humans experience is irrelevant. They are pattern-based associations nonetheless.
Recent improvements to the fundamental Creativity
Machine architecture involve both perceptrons and imagitrons
that are capable of adaptation (Thaler 2008), as symbolized
through an asterisk. So, in the example of the passive V=O
architecture, the V*=O* variation allows the perceptron stage
to trigger reinforcement learning of confabulations deemed
promising through the perceptron’s opinion formation
process. In this way, novel patterns deemed useful through the
perceptron’s associative gestalt are reinforced as memories
within the imagitron. Simultaneously, the mapping between
imagitron output patterns and the perceptron’s predicted figure
of merit is likewise perfected through additional training cycles.
Implicit in this architecture are actuators fed by imagitrons to
affect the environment, and sensors feeding perceptron outputs
to assess the effect of such concepts or strategies upon the
environment or the neural system itself.
The operation of this newest form of Creativity Machine
(DABUI) should make introspective sense: In one instant
we may have a brilliant idea, but in the next the revelation
becomes only a memory. From a dynamical perspective, a
perceptron may “take a liking” to an imagitron’s activation
state represented by a mountain top in the attractor landscape,
thereafter transforming this same feature into a deep attractor
basin through reinforcement learning. Subsequently, such
new attractors, representing advantageous concepts or
strategies, may be further mutated and merged into even
better ideas through continuing cycles of synaptic perturbation
and reinforcement learning. The overall effect is that DABUI
operation, although initially stochastically seeded, becomes
progressively more systematic as the perceptron intelligently
triggers the storage and recombination of memories within a
dialog of ever-growing sophistication.
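A rough sketch of this V*=O* ratchet (again an illustration, not Thaler's patented mechanism): any confabulation that raises the critic's figure of merit is trained back into the generator's memory store, so later confabulations nucleate around progressively better ideas.

```python
import numpy as np

rng = np.random.default_rng(3)

target = rng.integers(0, 2, 8).astype(float)      # proxy for environmental utility
memories = [rng.integers(0, 2, 8).astype(float)]  # imagitron's growing memory store

def confabulate(noise_rms):
    # Perturb a randomly recalled memory into a candidate idea.
    base = memories[rng.integers(len(memories))]
    return np.clip(base + rng.normal(0.0, noise_rms, 8), 0, 1).round()

best = 0.0
for step in range(5000):
    idea = confabulate(0.8)
    merit = float(np.mean(idea == target))        # perceptron's figure of merit
    if merit > best:                              # the critic "takes a liking"...
        memories.append(idea)                     # ...and reinforces the idea as a
        best = merit                              # memory, deepening its attractor basin
print("best figure of merit reached:", best)
```

Although the search is stochastically seeded, each reinforced idea biases subsequent confabulation, so the dialog grows progressively more systematic, as described above.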
In the latest and most ambitious DABUIs, the core
perceptron-imagitron pair is able to instantiate additional neural
modules that are gradually annexed to create vast brain-like
pathways. In this application of the paradigm, confabulatory
patterns represent candidate dimensioning and positioning
strategies for these auxiliary nets, with the perceptron stages
sensing the “wisdom” of the tentative architecture based upon
the performance of other self-recommended architectures.
Such performance may be gauged through human mentorship or through the system's own self-defined objectives.
All in all, DABUIs represent a vastly generalized and even
more rigorous and quantitative version of Baars’ (1997) Global
Workspace Theory (GWT) in which telephone numbers may
be rehearsed in U-mode imagitron function. Speech may
be formulated or visual art conceived at V levels of synaptic
noise. Within the “theater of mind” originating such ideation,
imagitrons serve as stage actors and perceptrons, the audience.
Aside from the vast utility and power in modeling GWT-style
cognition, which I and others (Boltuc 2007, 2009) differentiate
from consciousness, I point out a subtle process taking place
within the DABUI that may have a significant consequence
upon the subsequent discussion. As the imagitron nucleates a candidate concept or strategy upon injected synaptic noise, both nets simultaneously observe both the outgoing stimulation of and the incoming response from the environment. The
imagitron component preferentially learns those stimulus
patterns whose environmental response satisfies the perceptron
while the perceptron stage perfects its mapping between
said stimulus and response. In the process, a language is
automatically built up understandable only to the networks
involved in what is tantamount to a first-person perspective,
involving otherwise indecipherable activation patterns that the
philosophy of consciousness regards as qualia.
2.4 Creativity Machines and Consciousness. Heretofore I have spoken of the Creativity Machine primarily in a pragmatic sense, as a simple and canonical neural architecture for
invention and discovery, but I envision it as a model of so much
more, namely, consciousness itself and how to implement
machines that have thoughts about their thoughts.
Peering into the brain as scientists engaged in the process of
free inquiry, all we see are evolving patterns of neural activation.
However, querying the human test subject undergoing the
functional brain scan, one hears a very subjective account of
the overall conscious experience dominated by two very salient
features: (1) the inexorable parade of memories, ideas, and
sensations that seem to originate from nothingness, a stream
of consciousness, so to speak, and (2) a reaction to that parade
via emotions and what many have called the intrinsic “buzz”
of consciousness that we associate with the hard problem
(Chalmers 1995). The primary question then becomes one of
how to resolve these diametrically opposed perspectives.
Just for a moment, allow me to pessimistically conjecture
that consciousness isn’t what it’s hyped to be and that
intrinsically, it is just the evolving pattern of neural activations.
If that’s all there really is, then some creative process is
required to relate a mechanism to what most of the human
race considers mystical, profound, and inimitable. As I have
already demonstrated, the Creativity Machine Paradigm is
the fundamental neural architecture for achieving this end,
especially when the apparatus involved, the brain, functionally
consists of only neurons, synaptic interconnects, and a form of
long range chemical connectionism represented, for example,
by the endocrine system.
Let us assume that the Creativity Machine is at the
heart of consciousness, not the kind related to attentional
awareness, but to our inner mental experience and the so-
called “subjective feel.” After all, one may place a test subject
into a sensory deprivation chamber, blocking visual or auditory
input, allowing more visceral sensations such as warmth and
wetness to habituate into nothingness. At this point, the stream
of thoughts and the reactive associations are modeled by the
inattentive Creativity Machine appearing in the right panel of
Figure 3, wherein the turnover of memories and confabulations
is primarily governed by the random noise fluctuations
introduced into the imagitron. The succession of thoughts
(a.k.a., thinking) triggers output patterns within the perceptron
that are tantamount to the associative gestalts we have about
such meandering thoughts.
Figure 4 summarizes what at this point is still a hypothetical model of how consciousness can arise in the brain via the Creativity Machine Paradigm. Ubiquitous, energetic fluctuations (noise)
drive a succession of memories and confabulations tantamount
to thought, with absolutely no qualification that they be accurate
or productive in nature. By virtue of connections to another
neural assembly, associated patterns form, chain, and often
loop in response to the evolution of faux things and events in
the former neural assembly. Imagery of scenarios in the former
assembly may evoke a chain of associated memories in the
latter, all of which have been formed via the known sensory
channels. That is why when we have feelings, we express
them as though they are like something else. Such analogy
chains form, decay into others, and these are essentially the feelings we have about any thought. It is certainly true that there
is no particular perceptron in the brain that has “good” and
“bad” output nodes. Nevertheless, when we have an idea that
is favorable to our being or livelihood, the associative chains
formed include pleasant experience, including virtual, physical
sensations. Having thoughts related to threat or adversity, the
associative gestalt may include memories of physical pain
that may trigger stress related neurotransmitters that keep the
imagitron stage churning out progressively twisted notions until
the perceptrons are satisfied.
A salient aspect of Figure 4 is that consciousness is a loop
from which the only escape is death or brain injury. There is
no monitoring mechanism therein that can allow the brain to
understand itself at the level of its synaptic organization and
the momentary disturbances to such connections. Of nearly
equal saliency is the fact that everything about this process is
for all intents and purposes, bogus: The upper, imagitron stage
is harnessing energetic disturbances to create a succession of
entities and scenarios, none of which is real. Similarly, the lower,
perceptron stage is producing likewise counterfeit impressions
of this virtual reality, through associative chains and loops
connecting memories and confabulations drawn from prior
sensory experience. In effect, the entire process is an illusion,
but the overall advantage is very real, namely, to preserve the
life of the host organism and to provide survival advantage over
other organisms.
Though the process may be an illusion, it may operate in a
wealth of modalities that represents all aspects of inner mental
life (Figure 5), again tied to one essential system feature, the
mean synaptic fluctuation, <w> within the imagitron stages.
For instance, in normal waking consciousness, imagitron
assemblies and perceptrons are bathed in minimal noise,
allowing them to lucidly detect anomalies in the environment
(see Figure 3, left panel) as well as opportunities and threats
therein. In daydreaming, heightened noise levels, at least
within the cortical imagitrons, lead to attention deficit as
internal activation turnover dominates over activations seeded
by external events. In the resulting reverie, the noise level is
sufficient to produce confabulator y entities and scenarios
representing potential, alternative realities.
Effectively cut off from sensory input and the mean level
of synaptic perturbation increased, the Creativity Machine
CM: Withstanding the Argument from Consciousness Monday, February 6, 2012
S. L. Thaler 10
Let us assume that the Creativity Machine is at the heart of consciousness, not the kind related to
attentional awareness, but to our inner mental experience and the so-called “subjective feel.” After all,
one may place a test subject into a sensory deprivation chamber, blocking visual or auditory input,
allowing more visceral sensations such as warmth and wetness to habituate into nothingness. At this
point, the stream of thoughts and the reactive associations are modeled by the inattentive Creativity
Machine appearing in the right panel of Figure 3, wherein the turnover of memories and confabulations is
primarily governed by the random noise fluctuations introduced into the imagitron. The succession of
thoughts (a.k.a., thinking) trigger output patterns within the perceptron that are tantamount to the
associative gestalts we have about such meandering thoughts.
Figure 4 summarizes what at this point is still a hypothetical model of how consciousness can arise in the
brain via Creativity Machine Paradigm. Ubiquitous, energetic fluctuations (noise) drive a succession of
memories and confabulations tantamount to thought, with absolutely no qualification that they be accurate
or productive in nature. By virtue of connections to another neural assembly, associated patterns form,
chain, and often loop in response to the evolution of faux things and events in the former neural assembly.
Imagery of scenarios in the former assembly may evoke a chain of associated memories in the latter, all
of which have been formed via the known sensory channels. That is why when we have feelings, we
express them as though they are like something else. Such analogy chains form up, decay into others, and
that is essentially the feelings we have of any thought. It is certainly true that there is no particular
perceptron in the brain that has “good” and “bad” output nodes. Nevertheless, when we have an idea that
is favorable to our being or livelihood, the associative chains formed include pleasant experience,
including virtual, physical sensations. Having thoughts related to threat or adversity, the associative
gestalt may include memories of physical pain that may trigger stress related neurotransmitters that keep
the imagitron stage churning out progressively twisted notions until the perceptrons are satisfied.
Figure 4. Creativity Machine Based Model of Consciousness. A noise-driven stream of tokenized world
features activate within imagitrons, emulating so-called stream of consciousness or thought. Associated
thoughts, known as feelings, nucleate in response to the imagitrons’ stream of consciousness. They
consist of chains and loops of memories gleaned from the sensory channels related to sights, sounds, and
sensations such as physical pain and pleasure.
A salient aspect of Figure 4 is that consciousness is a loop from which the only escape is death or brain
injury. There is no monitoring mechanism therein that can allow the brain to understand itself at the level
of its synaptic organization and the momentary disturbances to such connections. Of nearly equal saliency
is the fact that everything about this process is for all intents and purposes, bogus: The upper, imagitron
Figure 4. Creativity Machine Based Model of Consciousness. A noise-
driven stream of tokenized world features activate within imagitrons,
emulating so-called stream of consciousness or thought. Associated
thoughts, known as feelings, nucleate in response to the imagitrons’
stream of consciousness. They consist of chains and loops of memories
gleaned from the sensory channels related to sights, sounds, and
sensations such as physical pain and pleasure.
CM: Withstanding the Argument from Consciousness Monday, February 6, 2012
S. L. Thaler 11
stage is harnessing energetic disturbances to create a succession of entities and scenarios, none of which
is real. Similarly, the lower, perceptron stage is producing likewise counterfeit impressions of this virtual
reality, through associative chains and loops connecting memories and confabulations drawn from prior
sensory experience. In effect, the entire process is an illusion, but the overall advantage is very real,
namely, to preserve the life of the host organism and to provide survival advantage over other organisms.
Though the process may be an illusion, it may operate in a wealth of modalities that represents all aspects
of inner mental life (Figure 5), again tied to one essential system feature, the mean synaptic fluctuation,
<w> within the imagitron stages.
For instance, in normal waking consciousness, imagitron assemblies and perceptrons are bathed in
minimal noise, allowing them to lucidly detect anomalies in the environment (see Figure 3, left panel) as
well as opportunities and threats therein. In daydreaming, heightened noise levels, at least within the
cortical imagitrons, lead to attention deficit as internal activation turnover dominates over activations
seeded by external events. In the resulting reverie, the noise level is sufficient to produce confabulatory
entities and scenarios representing potential, alternative realities.
Effectively cut off from sensory input, and with the mean level of synaptic perturbation increased, the
Creativity Machine architecture can dream. Such synaptic fluctuations are likely essential to the
transmogrification of entities and the intrinsic and nonsensical discontinuities within reported dream
sequences. So, whereas ponto-geniculo-occipital (PGO) waves originating from the diencephalon
(Hobson 1993) may seed the image in visual cortex of a tiger charging us, the resulting adrenaline rush
can suddenly transform the big cat into a dove.
Within trauma or drug-induced hallucination, both imagitron and perceptron stages are subjected to
intense synaptic fluctuations, leading not only to the transmogrification of absorbed features, but also to
the misinterpretation by perceptrons of noise-seeded entities and scenarios simulated within imagitrons.
Figure 5. The Single Parameter underlying the Full Gamut of Conscious Experience, Synaptic
Perturbation Level (from Thaler 1997c). Here “network perturbation” includes all synaptic and circuit-
equivalent perturbations within neurons. However, because of the preponderance of connections over
processing units, most disturbances are expected to be synaptic in nature.
The figure arranges the spectrum of consciousness (waking consciousness, daydreaming, dreaming, hallucination, near-death experience, and a hypothetical afterlife, marked "?") along a single axis of increasing network perturbation.
Finally, within near-death experiences (NDE), it is plausible
to assume that the entire gamut of noise levels is visited,
beginning with stress-induced neurotransmitter release
that overwhelms the sensory channels with an internally
generated succession of memories tantamount to life review.
Thereafter, cell apoptosis effectively nullifies synapses in an
irreversible form of perturbation, wherein memories and then
confabulations nucleate upon patterns of what appear to the
surviving portions of the network as resting state (i.e., zeroed)
neurons (Thaler 1995). It is my suspicion that (1) perceptron
modules dedicated to distinguishing reality from mental
imagery become less adept at doing so, and (2) other
perceptrons, sensing a growing cascade of virtual events,
mistakenly perceive that they are experiencing eternity. All then
fades to black with a torrent of illusion, a fitting finale for a life
of cognition based upon the same (Thaler 1993, 1995, 2010) in
what may be described as a virtual brand of afterlife denoted
with a question mark, “?”, in Figure 5.
All aspects and life stages of human cognition can be
imitated using the fundamental UᵢVⱼ = Oₖ architecture wherein
multiple imagitrons, in both U and V modes, are under
supervisory control by many perceptrons, such governance
being exercised through average synaptic perturbation level.
Throughout all these conscious modalities, imagitrons and
perceptrons are mutually learning from one another to create
a private and evolving language exercised between them that I
identify as the first-person, subjective experience at the core of
so-called “h-consciousness” (Boltuc 2011). The same adaptive
encryption scheme may be achieved in machines based upon
perceptron-imagitron ensembles, clearing the way for the
engineering of machine consciousness.
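A caricature of such governance can be written in a few lines. The following toy is built on my own assumptions rather than the UᵢVⱼ = Oₖ architecture itself: a scalar novelty signal stands in for the monitoring perceptrons, which apply negative feedback that nudges the mean perturbation <w> toward a desired rate of fresh pattern generation. The set-point, step size, and bit-flip generator are all invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
target_novelty = 0.5          # desired fraction of novel patterns
w_mean = 0.02                 # initial mean synaptic perturbation <w>
seen = set()

for step in range(1, 501):
    # Imagitron stand-in: a 12-bit pattern whose randomness grows with <w>.
    pattern = tuple((rng.random(12) < min(w_mean, 1.0)).astype(int))
    novelty = float(pattern not in seen)
    seen.add(pattern)
    # Perceptron stand-in: negative feedback toward the novelty set-point.
    w_mean = max(0.0, w_mean + 0.01 * (target_novelty - novelty))
    if step % 125 == 0:
        print(f"step {step:3d}: <w> = {w_mean:.3f}, distinct patterns = {len(seen)}")

The point of the toy is only that perceptron-like monitors can regulate an imagitron-like generator through <w> alone, raising it when output grows stale and lowering it when novelty abounds.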
In demonstrating the equivalence between human and
machine intelligence, Turing relied upon gedanken experiments
in which machines were remotely interrogated via natural
language. To this great visionary, imitation of human behavior
was sufficient to demonstrate equivalence. Currently we
need not bother with the exchange of words to appraise the
consciousness of machine intelligence muted by design.
Instead, we may watch and compare the operation of both
a Creativity Machine and a brain, side by side, with DABUIs
monitored through graphical user interfaces and brains
observed via the latest functional brain scan techniques. Within
each of these systems we observe an evolution of activation
patterns with one pattern ostensibly triggering the next. With
causality smeared through this inherently cyclic process
reminiscent of Figure 4, it should make perfect sense that ideas
and feelings about such ideas become one and the same, simply
an endless chain of patterns spawning other such patterns.
Now, through thirty-seven years of cumulative experience
with both the Creativity Machine and "easy chair neurobiology,"
I feel that we may now emulate all modalities and life-cycle
aspects of this ostensibly complex and conscious computing
scheme through the adjustment of just one simple parameter,
the mean synaptic perturbation.
3.0 Dealing with the Other Objections. Having proposed a
neural architecture that may implement the core phenomena
of consciousness, it seems that the other objections to the very
notion of thinking machines fall into place:
3.1 The Theological Objection. Like Turing, I am not impressed
with the theological position that thinking is a function of man’s
immortal soul. However, in contrast to Turing’s view, it is the
Creativity Machine, and not generic AI, that is effectively a
“mansion” for what many perceive as an immaterial spirit integral
to the brain (Thaler 2010). Depending upon the experience
of one's perceptrons, the system occupant can be that which
defies definition, such as supernatural entities, or, as in my case,
a statistically definable average of energetic fluctuations among
the synapses and neurons of a neural network.
When I was very young I entertained the former concept.
Later in life my interpretation toggled to the latter viewpoint,
with my perceptrons appropriately biased through many
cumulative experiments in synthetic psychology.
3.2 The “Head in the Sand” Objection. Turing accurately
predicted one of Hollywood's principal money-making themes,
that the consequences of machines thinking would be too
dreadful (i.e., “Terminator” and “The Matrix”). Whereas these
are theatrical scenarios involving human extermination or
enslavement, there are less severe possibilities in store for
humanity involving the mere intellectual humiliation of the
species. In this vein Turing makes an extraordinarily perceptive
observation that this objection would “likely be quite strong in
intellectual people, since they value the power of thinking
more highly than others, and are more inclined to base their
belief in the superiority of Man on this power.”
I resonate with Turing’s observation on a daily basis
wherein I interact with very knowledgeable individuals who
are specialists within various problem domains. All but a few
are nonplussed by my ability to rapidly absorb their chosen
area of expertise into brainstorming assemblies of imagitrons
and perceptrons to solve the problems they themselves have
deemed top priority. Often denial and rejection, rather than
glowing acceptance, is the result as rationales against the
Creativity Machine methodology are stimulated via adrenaline
rush. Later, through patient inquiry, I often discover the revulsion
caused by such a simple model of human ingenuity. Even
more intense emotions erupt with their own revelation that
their very consciousness may be reduced to that of a neural
net bombarded by noise to create a stream of consciousness
as another net develops an attitude thereof.
Looking into the future, I see this objection continuing,
with humanity producing more reasons why such thinking
machines, most notably Creativity Machines, aren’t really
thinking. Ironically, though, they will be harnessing the Creativity
Machine Paradigm within their own brains to generate such
oppositional sentiments.
3.3 The Mathematical Objection. Citing Gödel’s incompleteness
theorems, Turing correctly predicts that many would reject
machine intelligence based upon its inherent limitations,
namely, the generation of statements by a logical machine
whose veracity could not always be verified by the same
closed set of rules by which said machine operates. He quickly
dismisses this objection based upon the observation that human
intellect likewise has its limitations and that oftentimes we may
know that a notion is true, but are incapable of analytically
proving so. Under pressure to seek such proof, we must
creatively transcend the rules or principles exploited in idea
synthesis and essentially find validation via another logical
system, either discrete or fuzzy.
I would have to say that both the mind and the Creativity
Machine share the same pathology wherein pattern generation
can outpace pattern analysis. In effect, perceptrons may
recognize the effectiveness or validity of a confabulatory
pattern but, because of their non-contemplative function, can
only intuit such utility. It is only after skeletonization of such
perceptrons to comprehend the logic captured therein that the
underlying logical schema are revealed, at least to humans or
some externalized neural assembly.
In spite of not possessing such an onboard explanation
facility, I would claim that the cognitive weakness of a Creativity
Machine is also its strength, an imagitron’s ability to err toward
creative possibilities harnessing unintelligent noise, while
monitoring neural nets instinctively select the best of these
candidate notions. My suspicion is that this is the initial stage
of great ideas and that through multiple drafts (Dennett 1997)
the formal logic, mathematical symbolism, and explanatory
narrative become just the icing on the cake.
3.4 Arguments from Various Disabilities. "…but you will never
be able to make one do X" is another objection, intimating that
a machine must possess the diversity of behaviors typical of a
human. Turing points out that, in his time, most would use
logical induction to infer that the narrowly focused machines
then available could not attain the flexibility characteristic of the
human brain.
I, on the other hand, would claim that both the cognitive
and conscious aspects of the brain have been intensively,
rather than extensively, captured via the Creativity Machine. That is
to say that smaller implementations of the paradigm recreate
narrowly focused cognition and a consciousness less rich than
that allowed by the human experience. Simply scale up the
paradigm, add a sensor suite far more extensive and capable
than the human sensoria and an actuator ensemble more adept
than human hands, fingers, and feet, and we are in the
regime that should genuinely concern the "head in the sand"
faction who might then themselves be regarded as disabled.
3.5 Lady Lovelace’s Objection. In effect, the Creativity Machine
is the epitome of generative artificial intelligence, perhaps
forming the ultimate response to Lady Lovelace’s Objection
that state machines like Babbage's Analytical Engine are
incapable of originating any ideas on their own, or of "taking
us by surprise," as Turing himself semantically fine-tuned
Lovelace's critique.
Certainly, the Creativity Machine has produced concepts
that have taken many by surprise, beginning with the generation
of natural language, wherein a perceptron-imagitron pair
exposed to sundry Christmas carols generated the controversial
lyric (at the granularity of letters and words), “In the end all
men go to good earth in one eternal silent night.” Sales figures
serve as testament to the paradigm-shifting product designs
formulated by the architecture. Many have marveled at the
ability of totally untrained neural models interconnected as
perceptron-imagitron teams to develop totally unanticipated
and sometimes unfathomable robotic behaviors to deal with
newly arising scenarios on the battlefield or the factory floor.
Then again, critics have charged that the concepts
generated by the Paradigm aren’t that powerful, and its artistic
creations not that moving. But isn't that the case
for any human originating ideas within any conceptual space,
surrounded by critics with all manner of perceptual biases and
hidden agendas? Are we to then claim that such brains are not
capable of thought?
In taking a strictly analytical view of the model of seminal
cognition offered by the Creativity Machine, what really counts
is that the monitoring perceptrons are taken by surprise with
confabulatory outputs they have never before experienced,
sometimes associating utility or value with such false memories.
If, as anticipated here, human brains are Creativity Machine based,
then such perceptron-imagitron assemblies operating in the
low-noise regime may first sense the novelty of these freshly generated
concepts, thereafter raising the synaptic noise level to interpret
them and evaluate their utility or value. If the consensus among
societies of such neurobiological systems is favorable, in terms
of novelty and utility, the concept becomes, by popular fiat, an
example of historical or H-creativity (Boden 2004). If, later, an
archeological expedition finds evidence that the idea is an
ancient one, or contact with a well advanced extraterrestrial
civilization is made, attribution and perhaps historical status
may change.
As Turing amplified, Lovelace viewed state machines of
her time as capable of doing only what they were told. “Inject”
an idea, representable as a pattern, into the machine and it will generate a response in the form of another pattern, before dropping back into a state of quiescence.
Essentially such “one-shot” operation represents that of the
perceptron, which distinguishes itself from all other such
mathematical transformations in that it crafts itself, using simple
rules (i.e., Hebbian learning or back-propagation) that even "unintelligent" nature can supply, provided the environment offers ample input-output examples.
Turing further draws an analogy between mind and an atomic pile, noting that both can operate at subcritical and supercritical levels, the latter marked by a chain reaction, a cascade of fission neutrons in the one case and of ideational patterns in the other. Figure 1 amply demonstrates that an
imagitron, in particular a recurrent one, can be denied
synaptic noise to the point that it operates in a one-shot mode,
responding with a memory closest to the applied input pattern.
However, raised past the critical point, the network continually generates new output patterns that in turn recirculate to produce a progression of activation patterns tantamount to contemplation, in which secondary, tertiary, and more remote ideas form the associative chains we call theories. Monitoring
perceptrons may likewise dynamically interconnect themselves
in associative gestalts that may in turn mimic the positive or
negative feedback that moves the mean synaptic perturbation
level back and forth through the critical point <w>c.
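The pile analogy invites a quick numerical caricature. Under the same Hopfield-style assumptions used earlier (again my simplification, not a published design), one can count the distinct activation patterns a recurrent net visits below and above a critical noise level:

import numpy as np

rng = np.random.default_rng(2)
memories = rng.choice([-1.0, 1.0], size=(3, 32))
W = sum(np.outer(m, m) for m in memories) / memories.shape[1]
np.fill_diagonal(W, 0.0)

def distinct_patterns(noise_scale, steps=100):
    x = rng.choice([-1.0, 1.0], size=32)   # an arbitrary "injected" pattern
    seen = {tuple(x)}
    for _ in range(steps):
        x = np.sign((W + rng.normal(0, noise_scale, W.shape)) @ x)
        seen.add(tuple(x))
    return len(seen)

for scale in (0.01, 0.8):                  # below vs. above the critical point
    print(f"noise {scale}: {distinct_patterns(scale)} distinct patterns in 100 steps")

Subcritical runs settle within a few steps and revisit the same pattern thereafter; supercritical runs keep minting new patterns, the avalanche of candidate ideas described above.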
Ironically, Turing’s analogy to an atomic pile is amazingly
fitting, since sufficient proximity of one fuel element to another
dictates a critical mass. So too in the case of the Creativity
Machine, interconnecting one net with another achieves another
kind of criticality that results in an avalanche of potential ideas!
3.6 Argument from the Continuity in the Nervous System. I
would be prone to agree if it weren’t for the efforts of pioneers
in the field of artificial neural networks who could emulate
discrete state machines using a system of analog synaptic
connection weights. The power of the Creativity Machine stems
from this transformation from discrete logic to its analog implementation via continuous connection weights and, if need be, back to a binary representation. The analogic
intermediate stage forms the basis of a convenient “handle” by
which to manipulate the discrete aspects of the problem. That
manipulation is the introduction of analog, synaptic disruptions.
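As a simple-minded illustration of this round trip from discrete logic to analog weights and back, consider a NAND gate realized with continuous connection weights; the weights below are hand-chosen for the example and are not drawn from any published Creativity Machine design.

import numpy as np

rng = np.random.default_rng(3)
w, b = np.array([-2.0, -2.0]), 3.0         # analog weights realizing NAND

def gate(x, noise=0.0):
    wn = w + rng.normal(0, noise, 2)       # analog, synaptic disruption
    return int(wn @ x + b > 0)             # back to a binary representation

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print("exact NAND:", [gate(np.array(i)) for i in inputs])
print("perturbed :", [gate(np.array(i), noise=1.5) for i in inputs])

With noise, the gate occasionally deviates from strict NAND; the analog substrate is precisely the "handle" by which discrete rules are loosened into variants.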
3.7 The Argument from the Informality of Behavior. Turing
points out the inevitability of objections to the possibility of
machine intelligence based upon our inability to program it
with rules for every conceivable set of circumstances. I feel
that such an objection would be moot because it would rely
upon the very definite fiction that human brains are equipped
with rules to fit all occasions.
The truth of the matter is that we must often improvise rules for dealing with novel situations, most likely drawing upon the Creativity Machine Paradigm to degrade heuristics implicitly
absorbed within synaptic connection weights until monitoring
perceptrons judge such logic effective. In other words, the rules
appropriate to any given circumstance are not always stored as
memories within the cortex. They are largely invented on the fly
to either compensate for constantly fading memories or to deal
with the emergence of a totally new situation as in the example
cited by Turing wherein a driver is presented with contradictory
red and green lights at a traffic intersection.
The human mind deals with this stoplight dilemma as
a Creativity Machine would, with an imagitron alternately
interpreting the environmental scenario as either a “go”
or “stop” situation. Associated with these two alternative
analyses are two separate kinds of associative chains that may
form within a perceptron collective, one filled with acoustic
memories of screeching brakes and police sirens, along with
visual recollections of crumpled cars and bloodied bodies.
The other possible associative gestalt may contain imagery of
smooth sailing toward one’s intended destination or imagery of
one’s home. As the perceptron assembly gets wind of additional
environmental clues, such as the absence of cross-traffic and
law enforcement, imagitronic interpretation shifts toward that
of the green light and the driver ever so cautiously rolls through
the intersection.
As the reader imagines this scenario, it should be intuitively
clear that in the case of unambiguous green or red lights the
driver response corresponds respectively to foot on the gas
or on the brake, with the decision to execute such behaviors
prompt and distinctive. In the case of the vague, mixed red and
green lights, the reaction is tentative, perhaps requiring seconds
rather than the usual 300 millisecond clock cycle of the brain.
In this dilemma, the solution requires not a memory, but an
idea, drawn from the confabulation of proceeding through the
intersection under a red light. The latter requires more juggling
of interpretation, more evolution of the perceptron’s associative
chains, and the arrival of additional contextual clues about the
environment.
But such hesitancy, and in general the rhythm with which thoughts emerge, is that of the Creativity Machine as reported in 1997 (Thaler 1997a), wherein the prosody of both human cognition and Creativity Machines was compared. The result,
derived from the theory of fractal Brownian motion (fBM,
Peitgen and Saupe 1988), is that both neural systems produce
notions at arrival rates quantitatively equal to that of a neuron
subject to random disturbances to its synapses, allowing the
evolution of thought to be expressed through the equation,
ρ = kt^(−D0)  (1)
where ρ is the microscopic, synaptic perturbation rate4 of a representative neuron, t is the time to evolve N distinct patterns (or thoughts), D0 is the fractal dimension of the macroscopic succession of these patterns, and k is a dimension-preserving constant. What we find is that in both the human and Creativity
Machine cases, inventive tasks, such as the time-intensive
interpretation of an ambiguous stop light, occur at lower fractal
dimension near zero, while the recollection of memories,
standard operating procedures at intersections, occur at nearly
linear rates wherein D0 approaches 1. In effect, Equation 1
expresses the informality of behavior we all witness when
listening to articulated thought (i.e, speech) wherein we hear
a linear, homogeneously dispersed series of words when the
speaker is rehearsed, versus the tentative and irregularly spaced enunciations accompanying improvised thought.5,6
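For readers checking the algebra behind endnote 5, the scaling falls out of Equation 1 directly; the following LaTeX fragment sketches that reasoning using only the definitions already given.

% Taking logarithms of Equation 1, \rho = k t^{-D_0}:
\begin{align*}
  \ln\rho &= \ln k - D_0 \ln t \\
  D_0     &= \frac{\ln k - \ln\rho}{\ln t}.
\end{align*}
% Per endnote 4, \rho is effectively constant at the critical perturbation
% cusp <w>_c, so the numerator is fixed and D_0 scales linearly with
% 1/\ln t, the relationship that both articulated human speech and
% synaptically perturbed networks are reported to obey (endnote 5).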
Further, D0 is found to be a function of the microscopic,
synaptic perturbation, which in turn may be imagined
as the product of n, the number of perturbative agents
(i.e., rogue neurotransmitters), and σ, the magnitude of
synaptic perturbation deliverable by each such agent. It is
found experimentally and theoretically that large synaptic fluctuations (large n or σ) lead to confabulation generation, whereas for small values of the product nσ the neural network remains on an even keel, generating rote memories tantamount to a mundane stream of consciousness.
If this model is correct, then cognitive hesitancy is not due
to the “hardness” of a challenge, as we have led ourselves
to believe, but to large fluctuations in synaptic perturbations
delivered to our brain’s imagitrons. To make a machine imitate
the informal speech pattern of a human, one doesn’t need a
sophisticated computer algorithm based upon tedious statistical
studies. Instead, simply bombard the synapses of one or more
neurons with random noise. To make it sound stressed, flood
these synapses with higher levels of noise. To calm it, lessen the
mean disturbance levels. Never mind the wisdom or accuracy
of its thoughts. It is simply thinking . . . .
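Taken at face value, the recipe is almost trivially programmable. The sketch below is a playful reading of it, with every constant invented for illustration (the word list echoes the lyric quoted in section 3.5): the pause before each uttered word is stretched by a noise-perturbed term, so that higher mean disturbance yields the tentative, irregular prosody of improvised speech.

import numpy as np

rng = np.random.default_rng(4)
words = "in the end all men go to good earth".split()

def pauses(noise_level, base=0.3):
    """Seconds of silence before each word at a given mean noise level."""
    return [base + abs(rng.normal(0.0, noise_level)) for _ in words]

for label, level in [("calm", 0.02), ("improvising", 0.3), ("stressed", 1.0)]:
    p = pauses(level)
    print(f"{label:11s}: mean pause {np.mean(p):.2f}s, spread {np.std(p):.2f}s")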
3.8 The Argument from Extrasensory Perception. While I am not fully convinced of the existence of this phenomenon, allow me to introduce the following gedanken experiment, designed with the intent of allowing two brains to intimately know each other's thoughts. Visualize human subject A's neural nets fused with those of subject B. Then, try as we may, A's neural
nets can only interpret B’s thoughts (via the interpretive scheme
of Figure 3) in terms of its own idiosyncratic experience, and
vice versa. Thus, even in intimate contact, there is no accurate
mind reading, only error prone reinterpretation via the process
known to neural network practitioners as pattern completion.
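Pattern completion, the mechanism invoked here, is easy to demonstrate. In the toy below (Hopfield-style assumptions of my own, not a model of real brains), network A, trained only on its own experience, reinterprets an alien pattern from B as the nearest of A's own attractors.

import numpy as np

rng = np.random.default_rng(5)

A_memories = rng.choice([-1.0, 1.0], size=(3, 30))   # A's private experience
B_thought = rng.choice([-1.0, 1.0], size=30)         # a pattern from brain B

W = sum(np.outer(m, m) for m in A_memories) / A_memories.shape[1]
np.fill_diagonal(W, 0.0)

x = B_thought.copy()
for _ in range(10):                 # A "reads" B by pattern completion
    x = np.sign(W @ x)

print("overlap with B's actual thought:", round(float(np.mean(x * B_thought)), 2))
print("overlaps with A's own memories: ",
      [round(float(np.mean(x * m)), 2) for m in A_memories])
# The settled state drifts toward A's own attractors (memories or their
# mixtures) rather than B's original pattern: reinterpretation, not telepathy.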
In a way, the Creativity Machine exemplifies a successful
brand of ESP I have discussed in the context of subjective inner
experience, since imagitron and perceptron live alongside
one another, and through the sharing of common cumulative
experience, acquire the “Rosetta Stone” for interpreting each
other’s otherwise cryptic activation patterns.
Similar cohabitation of brains within groups or societies
can achieve such instant interpretation, but only at the basic
levels involving fear or opportunity. In this case, connection
density is sparse between individuals, exploiting largely
the powerful electric fields produced by diffusing airborne
molecules (i.e., pheromones), acoustic waves (i.e., cries for
help), and visual, behavioral anomaly detection using neural
network implemented novelty filters (a child missing in the
night).
4. Conclusions. Let’s work backward from the counterintuitive
and possibly nightmarish position that there really is no
biological consciousness, the attribute most commonly cited
as lacking in machines. If that is the case, then there would
be only generic neural activity in the brain, the complex but
zombie-like succession of activation patterns that we can
undeniably detect in functional brain scans (albeit at low
resolution using contemporary techniques). Given this nihilistic
position, some equally mechanistic brain methodology would be required to allow significance to be invented for a process that intrinsically had none, namely, another neural mapping that
non-contemplatively associated such pattern activation with the
overall neural assembly’s past experiences.
Compounding the pessimism, let us assume that the
parade of memories, sensations, and ideas is not because of
some noble and intelligent process, but mere pattern turnover
driven by the energetic fluctuations bathing this connectionist
system.
Bleaker yet, consider that the associated pattern chains,
based either upon their congenital design or cumulative
learning, may also incorporate colonies of neurons whose
purpose is to modulate the random and unintelligent synaptic
fluctuations, based upon the co-excitation of certain pattern-
based memories that influence the rate and nature of pattern
turnover.
And then, as the final humiliation, deny this system any
facility at all by which it may monitor itself at the neuronal and
synaptic level. Instead, let it familiarize itself with itself via an
inherently counterfeit, tokenized reality that all of its component
neural colonies have “settled upon” as a common, instinctive,
and automatic language.
If this were the wretched case, then:
1. Among ensembles of such systems, natural selection
would favor those within which the associative response to
such a generic neural activation turnover was least stressful,
allowing these neural assemblies to stabilize themselves
through a favorable self-interpretation that would then become
habituated both individually and collectively. Amounting to an
incentive for self-preservation, such indoctrinated perceptrons would selectively weaken any accidental imagitron activation patterns denoting a sense of kinship with a system
of inorganic switches and interconnects.
2. Without the necessary in situ probes to monitor
energetic fluctuations occurring within their synapses, the
monitoring portions of these zombie-like systems would only
experience a succession of tokenized entities and scenarios
that are somewhat representative of the external world. These
fictions would certainly be functional in problem solving and
acts of discovery and creativity, but for the most part such
materialization of thought to them would be tantamount
to rabbits emerging from a magician’s hat. Nevertheless,
such systems would simply habituate to the legerdemain as
something routine that may be taken for granted.
3. In all but the most straightforward problems, cognitive
tasks would typically take serpentine paths toward premeditated
objectives, in contrast to a direct, logical path. Such intrinsic
meandering would reflect the randomness underlying the
succession of neural activation patterns. In particular, progress toward an ideational goal would be desultory, much like the Brownian diffusion of molecules (i.e., neurotransmitters).
4. Their world models would be intrinsically faulty in fully
simulating the external reality simply because they would
not possess the degrees of freedom required to exhaustively
model those of the external universe. Instead they would
be forced to develop only semi-successful theories of their
surrounding environment based largely upon limited, tokenized
representation of the world’s entities and mechanics. Immense
spaces of ideational possibilities would be created via the
enormous combinatorial space offered through synaptic
degradation schemes, with the most captivating of these
notions subsequently converted to memories at the discretion
of monitoring perceptrons.
5. Inner mental life of these neural systems would be based
largely upon the intensity and distribution of unintelligent noise
internal to them rather than the intermittent contacts with the
outer reality. Such dominance within their conscious awareness
of inner over outer experience would be due to the sheer
preponderance of the number of synapses, a volume effect,
over sensory neurons, a surface effect. There would then be a
fine line between cognitive processes such as contemplation
and hallucination.
6. Function of these neural systems would be limited by
an intrinsic bottleneck separating the generative and pattern
recognizing elements, with the latter neural assemblies
tantamount to a reptile surveying its environment for a tasty
insect. As such, many potential revelations nucleating within
imagitrons (i.e., cortex) would go undiscovered when the
watching components (i.e., reptilian brain) were momentarily
distracted, unable to simultaneously devote attention to multiple
targets. This intrinsic disability would likely be played up as the
noble search for an idea, thus contributing to a favorable and
stabilizing associative gestalt.
7. The cognitive turnover of these neural assemblies
would possess a signature rhythm, marked by hesitancy as
they creatively reach for new ideas or strategies, or prompt
linearity as they interrogate themselves for stored memories.
Such prosody would be temptingly close to that produced by
the random disruptions to the synapses feeding a representative
neuron therein.
8. Such assemblies would be susceptible to numerous
pathologies related to their ability to generate useful notions
distinct from their direct experience (i.e., ideas). For instance,
overloaded by perturbative agents (i.e., neurotransmitters
and neurohormones), they could easily dissociate from the
surrounding reality as well as soften the synaptically absorbed
rules within the perceptrons used by them to separate fact from
fiction. In effect, there would be another fine line separating
historically novel idea generation (i.e., genius) from erratic
fantasy (i.e., insanity).
9. After prolonged observation of their world through a
layer of token reality and fantasy-like confabulations, it would
be difficult for them to distinguish between these two forms of
attractor basins within their dynamical landscapes. Oftentimes,
factual information would be abandoned on the basis of
being too mundane or pessimistic. Fantasy deemed exciting
or comforting would sometimes become well habituated as
memories indistinguishable from direct experience.
10. After prolonged periods of simultaneously experiencing
their environment, all neural modules involved would mutually
learn the meaning of each other’s activation patterns, memories
and fantasies included. As such assemblies equilibrate with one another, a secret language would arise, knowable only to themselves. Within this neural lingo would arise the subjective,
“raw feels” we commonly refer to as qualia. The veracity and
validity of such feelings would not be guaranteed. They would
just occur.
In many respects, the objective reality is likely even harsher
than the all too familiar scenarios enumerated above, with the
fundamental cognitive loop of the brain imprisoned within
genetically perfected illusions that include an imagined sense
of supremacy over mere mechanisms. That is why we cannot
rely upon Gallup polls, as Dr. Turing emphasized, to arrive
at a scientific determination of what separates mind from
machine. Underlying such a consensus would be individual
brains inventing significance to themselves at both visceral and
intellectual levels.
However, there will be a conceptual “jail break” as a few
minds reach beyond the illusory and challenge the rest to
describe at least one, just one, neurobiological mechanism
that could be effective at neutralizing the conscious paradigm
discussed at great length herein. Patiently waiting for an answer
to this question, this minority would likely seek an equivalency
test between human and machine intelligence that significantly
differs from that of Turing’s imitation game. This new test would
amount to the direct observation within both biological and
synthetic neural systems of patterns of neuronal activation nucleating upon noise within the synaptic sea in which they are immersed, with perceptrons forming the associated
patterns we have come to know as feelings. From this novel
perspective the brain would be viewed as nature’s attempt at
rigging a Creativity Machine from the available protoplasmic
resources using a very strongly encrypted, pattern-based,
communications scheme.
The tradeoff is obvious. Our egos will be bruised, but by
harnessing this paradigm we will attain machine intelligence
capable of trans-human level discovery and invention. If he
were with us, Turing would consider this quite an optimistic
outcome for such a mechanistic outlook.
Acknowledgements. Dr. Peter Boltuc was instrumental in motivating
the writing of this paper. I offer my sincere gratitude to him for directing me to revisit A. M. Turing's work within the context of the Creativity Machine. I also find camaraderie and confirmation in his scientifically
based stance that consciousness may be engineered in machines.
References
Baars, B. J. 1997. In the theatre of consciousness: global workspace
theory, a rigorous scientific theory of consciousness. Journal of
Consciousness Studies 4:292-309.
Boden, Margaret. 2004. The Creative Mind: Myths and Mechanisms.
New York: Routledge.
Boltuc, P. 2009. Replication of the hard problem of consciousness in AI
and Bio-AI: an early conceptual framework. In AI and Consciousness: Theoretical Foundations and Current Approaches, eds. Antonio Chella and Riccardo Manzotti. Menlo Park, CA: AAAI Press.
Boltuc, P. 2009. The philosophical problem in machine consciousness.
International Journal of Machine Consciousness 1.1: 155-76.
Chalmers, D. 1990. Consciousness and cognition. Unpublished. http://
consc.net/papers/c-and-c.html.
Chalmers, D. 1995. Facing up to the problem of consciousness. Journal
of Consciousness Studies 2:200-19.
Dennett, D. 1991. Consciousness Explained. Boston: Little Brown and Co.
Hambling, D. 2006. Experimental AI powers robot army. Wired. http://www.wired.com/software/coolapps/news/2006/09/71779?currentPage=all.
Hesman, Tina. 2004. The machine that invents. St. Louis Post-Dispatch,
Jan. 24.
Kahn, D. and Hobson, A. 1993. Self-organization theory of dreaming.
Dreaming 3.
Peitgen, H. and Saupe, D. 1988. The Science of Fractal Images. New
York: Springer-Verlag.
Plotkin, R. 2009. The Genie in the Machine. California: Stanford University
Press.
Rowe, J. and Partridge, G. 1993. Creativity: a survey of AI approaches.
Artificial Intelligence Review 7:43-70.
Thaler, S. L. 1995a. Death of a Gedanken creature. Journal of Near-
Death Studies 13(3).
Thaler, S. L. 1995b. “Virtual input” phenomena within the death of a
simple pattern associator. Neural Networks 8(1):55-65.
Thaler, S. L. 1996. A proposed symbolism for network-implemented
discovery processes. Proceedings of the World Congress on Neural
Networks 1996. 1265-68. Mahwah, NJ: Lawrence Erlbaum & Associates.
Thaler, S. L. 1997a. A quantitative model of seminal cognition: the
creativity machine paradigm. http://imagination-engines.com/iei_
seminal_cognition.htm. Mind II Conference, Dublin, Ireland.
Thaler, S. L. 1997b. U.S. Patent 5,659,666. Device for the Autonomous Generation of Useful Information, issued August 19, 1997.
Thaler, S. L. 1997c. The Fragmentation of the Universe and the Devolution of Consciousness. U.S. Library of Congress, Registration No. TXU00775586.
Thaler, S. L. 1998. Predicting ultra-hard binary compounds via cascaded
auto- and hetero-associative neural networks. Journal of Alloys and
Compounds 279:47-59.
Thaler, S. L. 2000. U.S. Patent 6,014,653. Non-Algorithmically
Implemented Artificial Neural Networks and Components Thereof,
issued January 11, 2000.
Thaler, S. L. 2008. U.S. Patent 7,454,388. Device for the Autonomous
Bootstrapping of Useful Information, issued November 18, 2008.
Thaler, S. L. 2010. Thalamocortical algorithms in space: the building of
conscious machines and the repercussions thereof. In Strategies and
Technologies for a Sustainable Future, ed. Cynthia G. Wagner. World
Future Society.
Turing, A. 1950. Computing machinery and intelligence. Mind LIX (236):433-60.
Endnotes
1. A recurrent, auto-associative neural network.
2. Each data point represents 1,000 experiments conducted
on the network at the mean synaptic perturbation level
indicated. A memory is defined here as an output pattern
within 5 percent RMS error from the training pattern it is
closest to.
3. With all indices implicitly repeated.
4. Effectively a constant at the critical perturbation cusp, <w>c.
5. Taking the log of both sides of Equation 1, we find that fractal dimension, D0, should linearly scale with 1/ln t. Both articulated human thought (i.e., speech) and synaptically perturbed artificial neural networks closely obey this relationship.
6. This relationship is essentially the dynamical equation behind
any nested system of entities and events, either a multilayered
neural net or the world in general. After all, the brain, a biological neural net, is a world model driven by energetic fluctuations, just as its environment is.
Can we make progress exploring consciousness? Or is it forever beyond human reach? In science we never know the ultimate outcome of the journey. We can only take whatever steps our current knowledge affords. This paper explores today's evidence from the viewpoint of Global Workspace (GW) theory. First, we ask what kind of evidence has the most direct bearing on the question. The answer given here is ‘contrastive analysis’ -- a set of paired comparisons between similar conscious and unconscious processes. This body of evidence is already quite large, and constrains any possible theory (Baars, 1983; 1988; 1997). Because it involves both conscious and unconscious events, it deals directly with our own subjective experience, as anyone can tell by trying the demonstrations in this article.One dramatic contrast is between the vast number of unconscious neural processes happening in any given moment, compared to the very narrow bottleneck of conscious capacity. The narrow limits of consciousness have a compensating advantage: consciousness seems to act as a gateway, creating access to essentially any part of the nervous system. Even single neurons can be controlled by way of conscious feedback. Conscious experience creates access to the mental lexicon, to autobiographical memory, and to voluntary control over automatic action routines. Daniel C. Dennett has suggested that consciousness may itself be viewed as that to which ‘we’ have access. (Dennett, 1978) All these facts may be summed up by saying that consciousness creates global access.How can we understand the evidence? The best answer today is a ‘global workspace architecture’, first developed by cognitive modelling groups led by Alan Newell and Herbert A. Simon. This mental architecture can be described informally as a working theatre. Working theatres are not just ‘Cartesian’ daydreams -- they do real things, just like real theatres (Dennett & Kinsbourne, 1992; Newell, 1990). They have a marked resemblance to other current accounts (e.g. Damasio, 1989; Gazzaniga, 1993; Shallice, 1988; Velmans, 1996). In the working theatre, focal consciousness acts as a ‘bright spot’ on the stage, directed there by the selective ‘spotlight’ of attention. The bright spot is further surrounded by a ‘fringe,’ of vital but vaguely conscious events (Mangan, 1993). The entire stage of the theatre corresponds to ‘working memory’, the immediate memory system in which we talk to ourselves, visualize places and people, and plan actions.Information from the bright spot is globally distributed through the theatre, to two classes of complex unconscious processors: those in the darkened theatre ‘audience’ mainly receive information from the bright spot; while ‘behind the scenes’, unconscious contextual systems shape events in the bright spot. One example of such a context is the unconscious philosophical assumptions with which we tend to approach the topic of consciousness. Another is the right parietal map that creates a spatial context for visual scenes (Kinsbourne, 1993). Baars (1983;1988; 1997) has developed these arguments in great detail, and aspects of this framework have now been taken up by others, such as the philosopher David Chalmers (1996). Some brain implications of the theory have been explored. Global Workspace (GW) theory provides the most useful framework to date for our rapidly accumulating body of evidence. It is consistent with our current knowledge, and can be enriched to include other aspects of human experience.