Abstract
A critical level of synaptic perturbation within a trained artificial neural system induces the nucleation of novel activation patterns, many of which could qualify as viable ideas or action plans. In building massively parallel connectionist architectures requiring myriad coupled neural modules driven to ideate in this manner, the need has arisen to shift the attention of computational critics to only those portions of the neural "real estate" generating sufficiently novel activation patterns. The search for a suitable affordance to guide such attention has revealed that the rhythm of pattern generation by synaptically perturbed neural nets is a quantitative indicator of the novelty of their conceptual output, that cadence characterized by a frequency and a corresponding temporal clustering discernible through fractal dimension. Anticipating that synaptic fluctuations are tantamount in effect to volume neurotransmitter release within the cortex, a novel theory of both cognition and consciousness arises, one reliant upon the rate of transitions within cortical activation topologies.
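What follows is a minimal sketch of the kind of experiment this abstract describes, using a small Hopfield-style associative memory as a stand-in for the trained system; the network size, perturbation levels, and the crude box-counting estimator are illustrative assumptions, not the paper's actual apparatus.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 5
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix of a small Hopfield-style associative memory.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def settle(s, Wp, sweeps=10):
    # Asynchronous updates; a handful of sweeps suffices at this scale.
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if Wp[i] @ s >= 0 else -1
    return s

def is_stored(s):
    # Exact match against a stored pattern (or its mirror state).
    return any(abs(s @ p) == N for p in patterns)

def box_dim(times, T=400):
    # Crude box-counting estimate of the temporal clustering of events.
    sizes = np.array([T // k for k in (2, 4, 8, 16, 32)])
    counts = [len({t // s for t in times}) for s in sizes]
    return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]

# Sweep the synaptic perturbation level and log when novel patterns nucleate.
for sigma in (0.0, 0.1, 0.2, 0.4):
    novel = [t for t in range(400)
             if not is_stored(settle(patterns[rng.integers(P)].copy(),
                                     W + rng.normal(0, sigma, W.shape)))]
    line = f"sigma={sigma:.2f}: {len(novel)}/400 novel patterns"
    if len(novel) > 20:
        line += f", temporal box dimension ~ {box_dim(novel):.2f}"
    print(line)
```

As the perturbation level rises past a critical value, settled states increasingly fail to match stored patterns, and the timing of those novel events is what the abstract proposes to characterize by fractal dimension.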
... Thinking in terms of machine learning systems, researchers have attempted to create models where the training inputs for achieving artificial creativity are represented by poorly defined data sets affected by perturbations and noise [47,48]. Studies of such models have revealed that achieving artificial creativity may contradict the standard approach to training an artificial neural network, since the perturbations associated with creativity interfere with the operation of the network, for instance by altering the values of its connection weights [47]. ...
Producing original musical outcomes, and arranging existing ones, is an art that takes years of learning and practice to master. Yet, despite constant advances in the field of AI-powered musical creativity, the production of quality musical outcomes remains a prerogative of humans. Here we demonstrate that a single bubble in water can be used to produce creative musical outcomes when it oscillates nonlinearly under an acoustic pressure signal that encodes a piece of classical music. The audio signal of the bubble's response resembles an electric guitar version of the original composition. We suggest, and provide plausible theoretical supporting arguments, that this property of the bubble can be used to create physics-inspired AI systems capable of simulating human creativity in the arrangement and composition of music.
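A rough sketch of the physics involved, assuming a forced, inviscid Rayleigh-Plesset-type bubble (surface tension and viscosity neglected, a simple two-tone chord standing in for the encoded music, and all parameter values illustrative rather than taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (SI): ambient pressure, water density,
# equilibrium bubble radius, polytropic exponent of the interior gas.
p0, rho, R0, kappa = 101325.0, 998.0, 1e-4, 1.4

def drive(t):
    # Stand-in "musical" drive: a two-tone chord at moderate amplitude.
    return 20e3 * (np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t))

def rhs(t, y):
    # Rayleigh-Plesset dynamics without viscosity or surface tension:
    # R*R'' + (3/2)*R'^2 = (p_gas - p0 - p_drive) / rho
    R, Rdot = y
    p_gas = p0 * (R0 / R) ** (3 * kappa)
    return [Rdot, ((p_gas - p0 - drive(t)) / rho - 1.5 * Rdot**2) / R]

t = np.linspace(0.0, 0.02, 20000)
sol = solve_ivp(rhs, (t[0], t[-1]), [R0, 0.0], t_eval=t, rtol=1e-8)
R = sol.y[0]

# The far-field sound scales with the volume acceleration d^2(R^3)/dt^2;
# harmonics generated by the nonlinearity distort the original two tones.
signal = np.gradient(np.gradient(R**3, t), t)
```

The harmonics introduced by the nonlinear gas-pressure term are what would lend the drive signal the distorted, guitar-like timbre the abstract reports.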
... So far, the VTL-inspired model of mind has proven consistent with the temporal dynamics of human thought [Thaler, 2014, 2016a], as well as demonstrating the relationship between psychopathologies and creative genius [Thaler, 2016b]. Most importantly though, VTL, and its first implementation in DABUS, demonstrates that it is much more than an AI tool for invention and discovery, being an autonomous synthetic intelligence that absorbs and contemplates its world, its revelations guided, appreciated, and selectively reinforced by its subjective feelings (i.e., sentience). ...
A novel form of neurocomputing allows machines to generate new concepts along with their anticipated consequences, all encoded as chained associative memories. Knowledge is accumulated by the system through direct experience as network chaining topologies form in response to various environmental input patterns. Thereafter, random disturbances to the connections joining these nets promote the formation of alternative chaining topologies representing novel concepts. The resulting ideational chains are then reinforced or weakened as they incorporate nets containing memories of impactful events or things. Such encodings of entities, actions, and relationships as geometric forms composed of artificial neural nets may well suggest how the human brain summarizes and appraises the states of nearly a hundred billion cortical neurons. It may also be the paradigm that allows the scaling of synthetic neural systems to brain-like proportions to achieve sentient artificial general intelligence (SAGI).
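A toy illustration of the chaining idea under stated assumptions: each node below stands in for an entire associative net, weighted edges for learned associations, and "valence" scores for memories of impactful things; the micro-world itself is entirely hypothetical.

```python
import random

# Hypothetical micro-world: nodes are associative memory modules, weighted
# edges are learned associations, and valence records impactful memories.
links = {
    "fire":   {"heat": 0.9, "light": 0.7, "danger": 0.8},
    "heat":   {"cook": 0.6, "danger": 0.5},
    "light":  {"see": 0.8},
    "danger": {"flee": 0.9},
    "cook":   {"food": 0.9},
    "see": {}, "flee": {}, "food": {},
}
valence = {"food": 1.0, "flee": 0.5, "see": 0.3, "danger": -0.8}

def chain(start, noise, rng, length=4):
    # Greedy associative chaining; noise perturbs the link weights, so
    # alternative chaining topologies (candidate ideas) can form.
    path = [start]
    while len(path) < length and links[path[-1]]:
        cand = {n: w + rng.gauss(0, noise) for n, w in links[path[-1]].items()}
        path.append(max(cand, key=cand.get))
    return path

rng = random.Random(2)
for noise in (0.0, 0.3, 0.6):
    c = chain("fire", noise, rng)
    score = sum(valence.get(n, 0.0) for n in c)   # reinforcement signal
    print(f"noise={noise}: {' -> '.join(c)}  (appraisal {score:+.1f})")
```

With no noise the strongest learned chain is recovered verbatim; as noise rises, alternative chains appear, and the summed valence supplies the reinforcement signal the abstract describes.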
... Whichever it may be, the key outstanding issue is the determination of what counts as a 'valuable' novelty. Thaler (1998, 2016a) discusses more specifically artificial creativity driven by neural networks, seeing its source in perturbations and noise. According to him, creativity is the opposite of the 'training' of a network: something that shakes its operation by altering connection weight values. ...
This article discusses three dimensions of creativity: metaphorical thinking; social interaction; and going beyond extrapolation in predictions. An overview of applications of neural networks in these three areas is offered. It is argued that the current reliance on the apparatus of statistical regression limits the scope of possibilities for neural networks in general, and in moving towards artificial creativity in particular. Artificial creativity may require revising some foundational principles on which neural networks are currently built.
When the connection weights within a trained artificial neural network are randomly perturbed, the network activates through a series of both straightforward memories and hybrids thereof. This process, which is called network 'cavitation', elicits a succession of 'internal images' in a fashion reminiscent of what is commonly called 'stream of consciousness'. Here, we lend support to this analogy by comparing the temporal structure of cavitating network outputs with the thought patterns of human volunteers. We discover that cavitation output and human cognition are nearly identical in their manner of evolution.
A synaptically perturbed neural network forms an efficient search engine within and around any conceptual space upon which it has been trained. By monitoring the temporal distribution of concepts emerging from such a system, we discover a quantitative agreement with the measured rhythm of human cognition, creative or otherwise. Closer examination of this transparent connectionist search engine suggests that much of human creativity may be attributed to the failure of cortical networks to activate into known memories as these networks perform vector completion upon their own internal disturbances. In lieu of intact memory activation, the networks produce a stream of degraded memories, now constituting what we commonly refer to as "ideas," that are filtered for utility and interest by attendant cortical networks.
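A minimal sketch of the failure-of-vector-completion mechanism these two abstracts describe, with binary memories and a nearest-neighbor completion rule standing in for the cortical network; the memory count, pattern width, and completion tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
memories = rng.choice([0, 1], size=(10, 16))          # stored binary memories

def complete(x, tol=2):
    # Vector completion: snap to the nearest memory if it is close enough;
    # otherwise the degraded pattern survives as a confabulation ("idea").
    d = np.abs(memories - x).sum(axis=1)
    return memories[d.argmin()] if d.min() <= tol else x

# Sweep the internal disturbance level and count intact recalls vs. ideas.
for flips in (1, 2, 3, 5):
    ideas = 0
    for _ in range(1000):
        m = memories[rng.integers(10)].copy()
        m[rng.choice(16, size=flips, replace=False)] ^= 1   # disturbance
        out = complete(m)
        ideas += not any((out == mem).all() for mem in memories)
    print(f"{flips} flipped bits -> {ideas / 10:.1f}% confabulated outputs")
```

Below the completion tolerance the net recalls intact memories; past it, the output abruptly shifts to degraded memories, echoing the critical perturbation level invoked in the main abstract.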
Recent experiments have shown that a trained chaotic network supervised by an associative net may produce human-level discovery and invention. Extending this so-called “Creativity Machine Paradigm” to more complex cascades of chaotic networks not only allows for more ambitious forms of discovery such as juxtapositional invention, but also suggests a nomenclature and a convenient classification scheme for creative human achievements.
Although the definition of the term “creativity” widely varies, recent developments in the field of artificial neural networks (ANN) lend a comprehensive model to all accounts of this highly prized cognitive process. From this bottom-up, computational perspective, seminal idea formation results from a noise-driven brainstorming session between at least two neural assemblies. In effect, ongoing disturbances both to and within such nets serve to drive a sequence of activation patterns in a process tantamount to stream of consciousness. At sufficiently intense disturbance levels, memories and their interrelationships degrade into false memories or confabulations, any of which could be of potential utility or appeal. If another ANN is provided to make this value judgment, we form an inventive neural architecture called a “Creativity Machine” (U.S. Patents 5,659,666, 7,454,388, and related U.S. divisional and foreign filings). Within such contemplative computational systems, the latter network may be allowed governance over the statistical placement and magnitudes of such disturbances, so as to induce the highest turnover of potentially useful or meaningful confabulations.
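A sketch of that two-network control loop under stated assumptions: imagine stands in for the perturbed generator net, critic for the appraising net, and the critic's utility band plus the hill-climbing schedule are purely hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(4)
memories = rng.random((8, 6))        # stand-in for the first net's training set

def imagine(noise):
    # Perturbed recall: a stored pattern plus a synaptic-noise surrogate.
    return memories[rng.integers(8)] + rng.normal(0, noise, 6)

def critic(p):
    # The second net's value judgment: novel enough to be new, close enough
    # to training to remain plausible (the band edges are assumptions).
    d = np.linalg.norm(memories - p, axis=1).min()
    return 0.2 < d < 0.8

def turnover(noise, trials=400):
    return sum(critic(imagine(noise)) for _ in range(trials)) / trials

# The critic governs the disturbance magnitude by stochastic hill climbing,
# seeking the highest turnover of potentially useful confabulations.
noise, best = 0.05, turnover(0.05)
for _ in range(40):
    trial = max(1e-3, noise + rng.normal(0, 0.03))
    rate = turnover(trial)
    if rate >= best:
        noise, best = trial, rate
print(f"settled disturbance level ~ {noise:.3f} (useful-confabulation rate {best:.2f})")
```

Any search over the disturbance magnitude would do; stochastic hill climbing is used here only because it keeps the sketch short.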
According to this simple, elegant, and working model, creativity may be attributed to the failure of biological neural networks to reconstruct memories of direct experience when exposed to nature’s ubiquitous disordering effects, as other “wetware” opportunistically exploits such mistakes and pragmatically perfects the underlying network flaws.
For about 200 years now mathematicians have developed the theory of smooth curves and surfaces in two, three, or higher dimensions. These are curves and surfaces that globally may have a very complicated structure but in small neighborhoods are just straight lines or planes. The discipline that deals with these objects is differential geometry, one of the most evolved and fascinating subjects in mathematics. Fractals, on the other hand, feature just the opposite of smoothness. While smooth objects yield no further detail on smaller scales, a fractal possesses infinite detail at all scales, no matter how small. The fascination that surrounds fractals has two roots. First, fractals are very suitable for simulating many natural phenomena; stunning pictures have already been produced, and it will not take very long before an uninitiated observer can no longer tell whether a given scene is natural or merely computer simulated. Second, fractals are simple to generate on computers: to generate a fractal one does not have to be an expert in an involved theory such as calculus, which is necessary for differential geometry. More importantly, the complexity of a fractal, measured as the length of the shortest computer program that can generate it, is very small.
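That last point is easy to make concrete: the following chaos-game sketch (corner coordinates and iteration count are arbitrary choices) generates the Sierpinski triangle in a handful of lines.

```python
import random

# The chaos game: repeatedly jump halfway toward a randomly chosen corner
# of a triangle; the visited points trace out the Sierpinski triangle.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.3, 0.3
points = []
for _ in range(100_000):
    cx, cy = random.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2   # jump halfway to a random corner
    points.append((x, y))
```

Plotting points reveals a figure with box-counting dimension log 3 / log 2 ≈ 1.585 and detail at every scale, produced by a program only a few lines long.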
By allowing one artificial neural network to govern the synaptic noise injected into another based upon its appraisal of patterns nucleating from such disturbances, a contemplative form of artificial intelligence is formed whose creativity and pattern delivery closely parallels that of human cognition. Drawing upon the theory of fractional Brownian motion, we may derive an equation, verifiable through statistical mechanics, which governs both the novelty and rhythm of pattern turnover within such neural systems. Through this equation, we gain valuable insight into the process of idea formation within the brain, whether that organ is making sense of its environment or itself. In doing so, a relationship between creativity and consciousness is revealed, along with a potential path toward building conscious machine intelligence.
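For orientation, here are two textbook relations from fractional Brownian motion theory that an analysis of this kind typically draws upon; the paper's own equation is not reproduced in this abstract, so these should be read as standard background rather than its specific result.

```latex
% Variance scaling of fBm increments, with Hurst index 0 < H < 1:
\left\langle \left[ B_H(t+\tau) - B_H(t) \right]^2 \right\rangle = \sigma^2 \tau^{2H}
% Fractal (box-counting) dimension of the graph of a one-dimensional fBm trace:
D = 2 - H
```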
A simple pattern association network, trained to convert any of eight three-bit input patterns to corresponding 3 × 3 pixel patterns, is destroyed by the random pruning of its connection weights. Within such “deaths” we see the frequent appearance of the trained output patterns independent of the application of the corresponding inputs. Such events, which we shall call “virtual inputs,” increase in frequency when input-layer weights are pruned in favor of those in the output layer. We ultimately attribute the virtual inputs to a neural network completion process in which the pattern of zeros produced by the stochastic decay is interpreted by the largely intact hidden and output layers as the application of any of the eight original training input vectors to the input units. After isolating the essential mechanisms producing virtual inputs, we generalize this phenomenon to a wide range of parallel distributed systems.
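A rough reconstruction of this experiment; the hidden-layer width, learning rate, pruning fractions, and random target patterns are assumptions, and event counts will vary with the seed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight 3-bit inputs mapped to eight random 3x3 (9-bit) target patterns.
X = np.array([[(i >> b) & 1 for b in range(3)] for i in range(8)], float)
Y = (rng.random((8, 9)) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny one-hidden-layer net by plain batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 12)); b1 = np.zeros(12)
W2 = rng.normal(0, 0.5, (12, 9)); b2 = np.zeros(9)
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ dO; b2 -= 0.5 * dO.sum(0)
    W1 -= 0.5 * X.T @ dH; b1 -= 0.5 * dH.sum(0)

# Randomly prune input-layer weights and look for "virtual inputs":
# a trained output pattern appearing without its corresponding input.
for frac in (0.2, 0.4, 0.6, 0.8):
    virtual = 0
    for _ in range(500):
        i = rng.integers(8)
        M = W1 * (rng.random(W1.shape) > frac)        # random pruning
        out = sigmoid(sigmoid(X[i] @ M + b1) @ W2 + b2) > 0.5
        virtual += any((out == (Y[j] > 0.5)).all()
                       for j in range(8) if not np.array_equal(Y[j], Y[i]))
    print(f"pruned {frac:.0%}: virtual inputs in {virtual}/500 trials")
```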
When an auto-associative neural network is trained within any conceptual space, the many rules and schema embodied within that knowledge domain are encoded through its many connection weights and biases. For instance, if such a network's input/output exemplars consist of numerous formulas representing known chemical compounds, subsequent network training will produce connection traces that embody the many implicit rules governing the constraints between constituent elements and their allowed proportionalities. That is to say, the net has gained a statistical 'insight' into the patterns of bonding, valence, and charge balance that must be observed in theorizing new chemical compounds. If that network is now made chaotic by random perturbation of its processing elements and connection weights, the resulting network activations will represent the formulas of a wide variety of plausible compounds, many of which may be considered novel from the standpoint of network training. We therefore attain an all-neural search engine for generating a stream of plausible chemical possibilities. Adding subsequent 'policing' networks to associate these emerging chemical formulas with various physical and chemical properties, we are able to either filter for sought characteristics or, alternatively, assemble expanding materials tabulations of potentially new compounds and their estimated properties. Here, we describe the theory, construction, function, and results of just such an autonomous materials discovery machine, tailored specifically to the search for new ultrahard binary compounds.
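A generate-and-police sketch in the spirit of this architecture; the three-number composition encoding, the stored "known" formulas, and the toy hardness estimator are entirely hypothetical placeholders for the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical encoding of a binary compound: [element A feature,
# element B feature, stoichiometric fraction of A], all scaled to [0, 1].
known = np.array([[0.10, 0.60, 0.50],
                  [0.20, 0.70, 0.33],
                  [0.30, 0.80, 0.25]])

def perturbed_recall(noise=0.1):
    # Stand-in for the chaotic auto-associative net: a stored formula plus
    # a synaptic-noise surrogate, clipped back into the valid encoding range.
    f = known[rng.integers(len(known))] + rng.normal(0, noise, 3)
    return np.clip(f, 0.0, 1.0)

def hardness_estimate(f):
    # Toy "policing network": any trained property predictor could stand in
    # here; this placeholder merely imposes a preference ordering.
    return 1.0 - f[:2].mean() + 0.2 * f[2]

# Generate a stream of plausible candidates, then filter for the sought trait.
candidates = sorted((perturbed_recall() for _ in range(500)),
                    key=hardness_estimate, reverse=True)
for f in candidates[:3]:
    print(np.round(f, 2), "estimated hardness", round(hardness_estimate(f), 2))
```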
Thaler, S.L. (1997a). U.S. Patent No. 5,659,666. Washington, DC: U.S. Patent and Trademark Office.