Chapter

Evolution in Materio


... An optimisation algorithm manipulates the material's properties within an iterative search by applying different configuration signals. This evolutionary process brings the material to a state where it can perform the desired computation, in the sense that its outputs can be interpreted according to a prespecified scheme (Figure 1: Concept of evolution in materio [1]). ...
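The iterative search loop described above can be sketched in software. Everything below (the stand-in "material" response, the mutation scheme, the toy target mapping) is a hypothetical illustration; in the real setting the loop would drive physical hardware rather than a function:

```python
import random

# Sketch of the evolution-in-materio loop: propose configuration signals,
# apply them to the material, read the outputs, and score them against a
# target interpretation scheme. The "material" here is a software stand-in.

def material_response(config, x):
    # Hypothetical substrate: a nonlinear map parameterised by the
    # configuration signals (a real system would be physical hardware).
    return sum(c * x ** i for i, c in enumerate(config))

def evaluate(config, cases):
    # Count mismatches between thresholded outputs and target logic values.
    errors = 0
    for x, target in cases:
        out = 1 if material_response(config, x) > 0.5 else 0
        errors += out != target
    return errors

def evolve(cases, pop_size=20, generations=50):
    random.seed(1)
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: evaluate(c, cases))
        half = pop_size // 2
        # Replace the worst half with mutated copies of the best half.
        pop[half:] = [[g + random.gauss(0, 0.1) for g in random.choice(pop[:half])]
                      for _ in range(pop_size - half)]
    pop.sort(key=lambda c: evaluate(c, cases))
    return pop[0]

cases = [(0.0, 0), (1.0, 1)]  # toy target mapping (illustrative only)
best = evolve(cases)
```

The key point is that the optimiser never models the material; it only observes input/output behaviour, which is what makes the approach applicable to substrates with no analytical description.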
... Figure 7 shows the output measurements and optimal thresholds for all possible binary inputs. These were obtained by measuring the voltage at pin 10 while the material was constantly charged with the optimal configuration voltages at pins 2, 11, 8, 0, 7, 9, 1, and 4, and a random sequence of binary triplets was applied to pins 5, 6, and 3. Output measurements between the two thresholds are interpreted as logic 1, and those outside them as logic 0. ...
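The dual-threshold readout described above is simple to state in code. The threshold values and voltage readings here are hypothetical, chosen only to illustrate the interpretation scheme, not taken from the paper:

```python
# Dual-threshold readout: voltages between the lower and upper threshold
# are read as logic 1; voltages outside that band are read as logic 0.
def interpret_output(voltage, lower, upper):
    return 1 if lower <= voltage <= upper else 0

# Hypothetical thresholds and pin readings (illustrative values only).
lower, upper = 1.2, 2.8
readings = [0.5, 1.5, 2.0, 3.1]
bits = [interpret_output(v, lower, upper) for v in readings]
# bits == [0, 1, 1, 0]
```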
Conference Paper
Full-text available
Evolution In Materio (EIM) is concerned with solving computational problems by exploiting the physical properties of materials. This paper presents the results of using a particle swarm optimisation (PSO) algorithm for evolving logic circuits in single-walled carbon nanotube (SWCNT) based composites on a special custom-made platform. The material used is a composite of SWCNTs dispersed randomly in a polymer, forming a complex conductive network. Following the EIM methodology, the conductance of the material is manipulated for evolving threshold-based logic circuits. The problem is formulated as a constrained, mixed-integer optimisation problem. It is solved using PSO in conjunction with the shortest position value rule. The results showed that the conductive properties of SWCNTs can be used to configure these materials to evolve multiple input/output logic circuits.
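The shortest position value (SPV) rule mentioned above maps a particle's continuous position values to a discrete ordering by ranking them, which is how a continuous PSO can handle the integer part of a mixed-integer problem. A minimal sketch follows; the fitness function, PSO coefficients, and problem sizes are illustrative stand-ins, not the paper's circuit-evaluation setup:

```python
import random

def spv(position):
    # SPV rule: rank the continuous values; the resulting index order
    # is used as the discrete assignment (e.g. a pin ordering).
    return sorted(range(len(position)), key=lambda i: position[i])

def fitness(voltages, pin_order):
    # Toy objective (illustrative only): prefer small voltages and an
    # ascending pin order. A real EIM fitness would query the material.
    penalty = sum(1 for a, b in zip(pin_order, pin_order[1:]) if a > b)
    return sum(v * v for v in voltages) + penalty

def pso(dim=4, swarm=20, iters=100, vmax=0.5):
    # Each particle carries `dim` continuous voltages plus `dim` position
    # values that the SPV rule converts into a discrete ordering.
    random.seed(0)
    pos = [[random.uniform(-1, 1) for _ in range(2 * dim)] for _ in range(swarm)]
    vel = [[0.0] * (2 * dim) for _ in range(swarm)]

    def score(p):
        return fitness(p[:dim], spv(p[dim:]))

    pbest = [p[:] for p in pos]
    pbest_f = [score(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2 * dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.4 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                pos[i][d] += vel[i][d]
            f = score(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f

best_fitness = pso()
```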
... The assumption that evolution is efficient at creating reservoirs in materio is currently unproven. Configuration through random search has been documented only once in the evolution in materio literature, using Harding's evolvable Liquid Crystal Display [17]. In that work, Harding concluded that using random search alone was not sufficient to create the desired non-linear functions for that particular hardware platform. ...
Conference Paper
Full-text available
Recent work has shown that computational substrates made from carbon nanotube/polymer mixtures can form trainable Reservoir Computers. This new reservoir computing platform uses computer based evolutionary algorithms to optimise a set of electrical control signals to induce reservoir properties within the substrate. In the training process, evolution decides the value of analogue control signals (voltages) and the location of inputs and outputs on the substrate that improve the performance of the subsequently trained reservoir readout. Here, we evaluate the performance of evolutionary search compared to randomly assigned electrical configurations. The substrate is trained and evaluated on time-series prediction using the Santa Fe Laser generated competition data (dataset A). In addition to the main investigation, we introduce two new features closely linked to the traditional reservoir computing architecture, adding an evolvable input-weighting mechanism and a reservoir time-scaling parameter. The experimental results show evolved configurations across all four test substrates consistently produce reservoirs with greater performance than randomly configured reservoirs. The results also show that applying both input-weighting and time-scaling simultaneously can provide additional tuning to the task, improving performance. For one material, the evolved reservoir is shown to outperform – for this task – all other hardware-based reservoir computers found in the literature. The same material also outperforms a simple evolved simulated Echo State Network of the same size. The performance of this material is reported to be both consistent after long time-periods and after reconfiguration to other tasks.
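The two added features, an input-weighting factor and a reservoir time-scaling parameter, have direct analogues in a conventional echo state network, where they appear as an input scaling and a leak rate. The sketch below illustrates both knobs on a simulated reservoir; the network size, scalings, and sine-prediction task are illustrative assumptions, not the paper's substrate or the Santa Fe dataset:

```python
import numpy as np

# Minimal echo state network illustrating the two evolvable knobs:
# an input-weighting factor and a time-scale (leak) rate.
rng = np.random.default_rng(0)
n = 50
W = rng.uniform(-0.5, 0.5, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1
W_in = rng.uniform(-1, 1, n)

def run_reservoir(u, input_scale=0.5, leak=0.3):
    # `leak` blends old and new state (time-scaling);
    # `input_scale` weights the drive signal (input-weighting).
    x = np.zeros(n)
    states = []
    for u_t in u:
        pre = np.tanh(W @ x + W_in * input_scale * u_t)
        x = (1 - leak) * x + leak * pre
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave via a ridge-regression readout.
t = np.arange(300)
u = np.sin(0.2 * t)
X = run_reservoir(u[:-1])
y = u[1:]
washout = 50
A = X[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n), A.T @ y[washout:])
mse = np.mean((A @ w_out - y[washout:]) ** 2)
```

In the in-materio setting, evolution tunes these two parameters alongside the configuration voltages, rather than setting them by hand as here.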
Article
Full-text available
This article defends a modest version of the Physical Church-Turing thesis (CT). Following an established recent trend, I distinguish between what I call Mathematical CT—the thesis supported by the original arguments for CT—and Physical CT. I then distinguish between bold formulations of Physical CT, according to which any physical process—anything doable by a physical system—is computable by a Turing machine, and modest formulations, according to which any function that is computable by a physical system is computable by a Turing machine. I argue that Bold Physical CT is not relevant to the epistemological concerns that motivate CT and hence not suitable as a physical analog of Mathematical CT. The correct physical analog of Mathematical CT is Modest Physical CT. I propose to explicate the notion of physical computability in terms of a usability constraint, according to which for a process to count as relevant to Physical CT, it must be usable by a finite observer to obtain the desired values of a function. Finally, I suggest that proposed counterexamples to Physical CT are still far from falsifying it because they have not been shown to satisfy the usability constraint.
• 1 The Mathematical Church–Turing Thesis
• 2 A Usability Constraint on Physical Computation
• 3 The Bold Physical Church–Turing Thesis
• 3.1 Lack of confluence
• 3.2 Unconstrained appeals to real-valued quantities
• 3.3 Falsification by irrelevant counterexamples
• 4 The Modest Physical Church–Turing Thesis
• 4.1 Hypercomputation: genuine and spurious
• 4.2 Relativistic hypercomputers
• 4.3 Other challenges to Modest Physical CT
• 5 Conclusion
Article
Full-text available
The enormous contribution of Richard Feynman to modern physics is well known, both to teaching through his famous Feynman Lectures on Physics, and to research with his Feynman diagram approach to quantum field theory and his path integral formulation of quantum mechanics. Less well known perhaps is his long-standing interest in the physics of computation and this is the subject of this paper. Feynman lectured on computation at Caltech for most of the last decade of his life, first with John Hopfield and Carver Mead, and then with Gerry Sussman. The story of how these lectures came to be written up as the Feynman Lectures on Computation is briefly recounted. Feynman also discussed the fundamentals of computation with other legendary figures of the computer science and physics community such as Ed Fredkin, Rolf Landauer, Carver Mead, Marvin Minsky and John Wheeler. He was also instrumental in stimulating developments in both nanotechnology and quantum computing. During the 1980s Feynman re-visited long-standing interests both in parallel computing with Geoffrey Fox and Danny Hillis, and in reversible computation and quantum computing with Charles Bennett, Norman Margolus, Tom Toffoli and Wojciech Zurek. This paper records Feynman's links with the computational community and includes some reminiscences about his involvement with the fundamentals of computing.
Article
Full-text available
In this paper, we review an emerging engineering discipline to program cell behaviors by embedding synthetic gene networks that perform computation, communications, and signal processing. To accomplish this goal, we begin with a genetic component library and a biocircuit design methodology for assembling these components into compound circuits. The main challenge in biocircuit design lies in selecting well-matched genetic components that when coupled, reliably produce the desired behavior. We use simulation tools to guide circuit design, a process that consists of selecting the appropriate components and genetically modifying existing components until the desired behavior is achieved. In addition to such rational design, we also employ directed evolution to optimize genetic circuit behavior. Building on Nature's fundamental principle of evolution, this unique process directs cells to mutate their own DNA until they find gene network configurations that exhibit the desired system characteristics. The integration of all the above capabilities in future synthetic gene networks will enable cells to perform sophisticated digital and analog computation, both as individual entities and as part of larger cell communities. This engineering discipline and its associated tools will advance the capabilities of genetic engineering, and allow us to harness cells for a myriad of applications not previously achievable.
Article
Full-text available
Abstract - In the late 1950's Gordon Pask constructed several electrochemical devices having emergent sensory capabilities. These control systems possessed the ability to adaptively construct their own sensors, thereby choosing the relationship between their internal states and the world at large. Devices were built that evolved de novo sensitivity to sound or magnetic fields. Pask's devices have far-reaching implications for artificial intelligence, self-constructing devices, theories of observers and epistemically-autonomous agents, theories of functional emergence, machine creativity, and the limits of contemporary machine learning paradigms.
Chapter
Full-text available
Artificial evolution as a design methodology for hardware frees many of the simplifying constraints normally imposed to make design by humans tractable. However, this freedom comes at some cost, and a whole fresh set of issues must be considered. Standard genetic algorithms are not generally appropriate for hardware evolution when the number of components need not be predetermined. The use of simulations is problematic, and robustness in the presence of noise or hardware faults is important. We present theoretical arguments, and illustrate with a physical piece of hardware evolved in the real-world (intrinsically evolved hardware). A simple asynchronous digital circuit controls a real robot, using a minimal sensorimotor control system of 32 bits of RAM and a few flip-flops to co-ordinate sonar pulses and motor pulses with no further processing. This circuit is tolerant to single-stuck-at faults in the RAM. The methodology is applicable to many types of hardware, including Field-Programmable Gate Arrays (FPGA's).
Conference Paper
Full-text available
‘Unconstrained intrinsic hardware evolution’ allows an evolutionary algorithm freedom to find the forms and processes natural to a reconfigurable VLSI medium. It has been shown to produce highly unconventional but extremely compact FPGA configurations for simple tasks, but these circuits are usually not robust enough to be useful: they malfunction if used on a slightly different FPGA, or at a different temperature. After defining an ‘operational envelope’ of robustness, the feasibility of performing fitness evaluations in widely varying physical conditions in order to provide a selection-pressure for robustness is demonstrated. Preliminary experimental results are encouraging.
Book
Full-text available
A reaction-diffusion computer is a spatially extended chemical system, which processes information by transforming an input concentration profile to an output concentration profile in a deterministic and controlled manner. In reaction-diffusion computers, the data are represented by concentration profiles of reagents, information is transferred by propagating diffusive and phase waves, computation is implemented via the interaction of these traveling patterns (diffusive and excitation waves), and the results of the computation are recorded as a final concentration profile. Chemical reaction-diffusion computing is among the leaders in providing experimental prototypes in the fields of unconventional and nature-inspired computing. This chapter provides a case-study introduction to the field of reaction-diffusion computing, and shows how selected problems and tasks of computational geometry, robotics, and logics can be solved by encoding data within transient states of a chemical medium and by programming the dynamics and interactions of chemical waves.
Article
Full-text available
The approaches that could be brought under non-classical computation pose challenges for computer science. Wittgenstein's work on the philosophy of language, the Tractatus, and its account of the relationship between language and the world can be used as a model of classical computation. The Tractatus proposes that a proposition can have only one complete analysis, dependent on its essential features that link to the referent objects. A major result of the Tractatus stance is that every object is potentially unambiguously describable, and this provides an extensive model of computer languages.
Article
Full-text available
Mesoscopic organization in soft, hard, and biological matter is examined in the context of our present understanding of the principles responsible for emergent organized behavior (crystallinity, ferromagnetism, superconductivity, etc.) at long wavelengths in very large aggregations of particles. Particular attention is paid to the possibility that as-yet-undiscovered organizing principles might be at work at the mesoscopic scale, intermediate between atomic and macroscopic dimensions, and the implications of their discovery for biology and the physical sciences. The search for the existence and universality of such rules, the proof or disproof of organizing principles appropriate to the mesoscopic domain, is called the middle way.
Article
Full-text available
Natural selection is one of the most important concepts for biology students to understand, but students frequently have misconceptions regarding how natural selection operates. Many of these misconceptions, such as a belief in "Lamarckian" evolution, are based on a misunderstanding of inheritance. In this essay, we argue that evolution instructors should clarify the genetic basis of natural selection by discussing examples of DNA sequences that affect fitness. Such examples are useful for showing how natural selection works, for establishing connections between genetics and evolution, and for creating cognitive conflict within students having misconceptions. We describe several examples of genes that instructors might use during lectures, and present preliminary evidence from our classroom that an evolution curriculum rich in DNA sequences is effective at reducing student misconceptions of natural selection.
Book
Molecular self-assembly is a widespread phenomenon in both chemistry and biochemistry. Yet it was not until the rise of supramolecular chemistry that attention has increasingly been given to the designed self-assembly of a variety of synthetic molecules and ions. To a large extent, success in this area has reflected knowledge gained from nature. However, an increased awareness of the latent steric and electronic information implanted in individual molecular components has also contributed to this success. Whilst not yet approaching the sophistication of biological assemblies, synthetic systems of increasing subtlety and considerable aesthetic appeal have been created. Self-Assembly in Supramolecular Systems surveys highlights of the progress made in the creation of discrete synthetic assemblies and provides a foundation for new workers in the area, as well as background reading for experienced supramolecular chemists.
Book
For many decades, the proponents of `artificial intelligence' have maintained that computers will soon be able to do everything that a human can do. In his bestselling work of popular science, Sir Roger Penrose takes us on a fascinating tour through the basic principles of physics, cosmology, mathematics, and philosophy to show that human thinking can never be emulated by a machine. Oxford Landmark Science books are 'must-read' classics of modern science writing which have crystallized big ideas, and shaped the way we think.
Book
This book provides a multidisciplinary introduction to the subject of Langmuir–Blodgett films. These films are the focus of intense current worldwide interest, as the ability to deposit organic films of nanometre thicknesses has many implications in materials science, and in the development of new electronic and opto-electronic devices. Beginning with the application of simple thermodynamics to the common bulk phases of matter, the book outlines the nature of the phases associated with floating monolayer films. The Langmuir–Blodgett deposition process itself is described in some detail and contrasted with other thin film techniques. Monolayer-forming materials and the structural, electrical and optical properties of Langmuir–Blodgett films are discussed separately. Each chapter is comprehensive, easy to understand and generously illustrated. Appendices are provided for the reader wishing to delve deeper into the physics and chemistry background.
Article
The article describes some of the progress made in the advancement of Langmuir-Blodgett films. Possible uses for these films in integrated optics and IC manufacture are discussed.
Book
This book provides a broad overview of the entire field of DNA computation, tracing its history and development. It contains detailed descriptions of all major theoretical models and experimental results to date, which are lacking in existing texts, and discusses potential future developments. This book will provide a useful reference source for researchers and students, as well as an accessible introduction for people new to the field. The field of DNA computation has flourished since the publication of Adleman's seminal article, in which he demonstrated for the first time how a computation may be performed at a molecular level by performing standard operations on a tube of DNA strands. Since Adleman's original experiment, interest in DNA computing has increased dramatically. This monograph provides a detailed survey of the field, before describing recent theoretical and experimental developments. It concludes by outlining the challenges faced by researchers in the field and suggests possible future directions.
Book
Join the authors on a journey where they describe the possibility of computers composed of nothing more than chemicals. Unlikely as it sounds, the book introduces the topic of 'reaction-diffusion computing', a topic which in time could revolutionise computing and robotics.
Chapter
Liquid Crystal Spatial Light Modulation; Optical Correlation; Optical Interconnects; Wavelength Tuneable Filters and Lasers; Optical Neural Networks and Smart Pixels; Other Applications; References
Article
During the last few decades, an extensive development of the theory of computing machines has occurred. On an intuitive basis, a computing machine is considered to be any physical system whose dynamical evolution takes it from one of a set of 'input' states to one of a set of 'output' states. For a classical deterministic system the measured output label is a definite function f of the prepared input label. However, quantum computing machines, and indeed classical stochastic computing machines, do not 'compute functions' in the considered sense. Attention is given to the universal Turing machine, the Church-Turing principle, quantum computers, the properties of the universal quantum computer, and connections between physics and computer science.
Article
An excellent professional reference and a superior upper-level student text, this landmark book offers readers the first comprehensive treatment of all the basic principles underlying the unique physical and optical properties of liquid crystals. Written by a noted pioneer in the nonlinear optics of liquid crystals, it also provides an authoritative, in-depth discussion of the mechanisms and theoretical principles behind all major nonlinear optical phenomena occurring in liquid crystals. Includes an exhaustive treatment of the physical properties and molecular and chemical structures of the thermotropic liquid crystals Examines the theoretical aspects of their isotropic and liquid crystalline phases, including order parameters, elastic constant, Free energy, viscosity and flows, refractive indices, and birefringence Covers new materials such as polymeric liquid crystals, polymer dispersed liquid crystals, dyedoped liquid crystals, and ferroelectric liquid crystals Delineates all known mechanisms for optical nonlinearities in the principal mesophases of liquid crystals Provides a comprehensive summary of all major nonlinear optical phenomena observed in liquid crystals to date
Article
Electrochromic materials have the property of a change, evocation, or bleaching of color as effected either by an electron-transfer (redox) process or by a sufficient electrochemical potential. The main classes of electrochromic materials are surveyed here, with descriptions of representative examples from the metal oxides, viologens (in solution and as adsorbed or polymeric films), conjugated conducting polymers, metal coordination complexes (as polymeric, evaporated, or sublimed films), and metal hexacyanometallates. Examples of the applications of such electrochromic materials are included. Other materials aspects important for the construction of electrochromic devices include optically transparent electrodes, electrolyte layers, and device encapsulation. Commercial successes, current trends, and future challenges in electrochromic materials research and development are summarized.
Article
The Emperor's New Mind (hereafter Emperor) is an attempt to put forward a scientific alternative to the viewpoint of "strong AI," according to which mental activity is merely the acting out of some algorithmic procedure. John Searle and other thinkers have likewise argued that mere calculation does not, of itself, evoke conscious mental attributes, such as understanding or intentionality, but they are still prepared to accept that the action of the brain, like that of any other physical object, could in principle be simulated by a computer. In Emperor I go further than this and suggest that the outward manifestations of conscious mental activity cannot even be properly simulated by calculation. To support this view, I use various arguments to show that the results of mathematical insight, in particular, do not seem to be obtained algorithmically. The main thrust of this work, however, is to present an overview of the present state of physical understanding and to show that an important gap exists at the point where quantum and classical physics meet, as well as to speculate on how the conscious brain might be taking advantage of whatever new physics is needed to fill this gap to achieve its nonalgorithmic effects.
Article
Smart windows that regulate the transmission of visible light are well known, but with the continuing interest in modifying the radar signature of military hardware, there is a need also for smart microwave windows and surfaces. The paper reviews progress on the fabrication and characterization of poly(aniline)-silver-polymer electrolyte composite materials. Discs and films of this material have been characterized over the frequency range 0.5-18 GHz. The materials demonstrate a rapid and reversible change in their microwave reflectivity when a small dc potential is applied across them. The best samples have exhibited a reflectivity change in excess of 20 dB in a coaxial line test set. Cyclic voltammetry studies of these composite materials are discussed in the light of a poly(aniline)|polymer electrolyte|silver single-cell model. The effect of the poly(aniline) counter ion, the polymer electrolyte and the application of a bias potential on the dc and microwave results is discussed. Geometries of smart surfaces that might utilize these materials are then proposed and their characteristics are evaluated.
Book
The first edition of Pope and Swenberg’s Electronic Processes of Organic Crystals, published in 1982, became the classic reference in the field. It provides a tutorial on the experimental and related theoretical properties of aromatic hydrocarbon crystals and includes emerging work on polymers and superconductivity. This new edition has been expanded to cover the major theoretical and experimental advances over the last fifteen years. It contains a unified description of what is known in almost every aspect of the field. The basic phenomena covered in the first edition included fluorescence, exciton and charge carrier generation, transport, recombination, and photoemission; the new edition adds solitons, polarons, bipolarons, spin waves, and charge density waves. It provides in-depth coverage of such model polymers such as polyacetylene, polydiacetylene, poly (phenylene-vinylene), polyanilines, polysilanes, and fullerenes. It also provides detailed treatments of the expanding areas of electroluminescence, non-linear optics, organic magnets, organic superconductors, and Langmuir-Blodgett films. In addition, it contains a chapter on major applications, including LED’s, photocopiers, photoconductors, batteries, transistors, liquid crystals, photorefractive devices, and sensors. As in the first volume, the authors take informed positions in controversial areas. This book will be an essential reference for organic material scientists, whether they are experienced researchers or just entering the field. It will also be a reliable guide to anyone interested in this rapidly growing field
Article
Andrew Boucher (1997) argues that "parallel computation is fundamentally different from sequential computation" (p. 543), and that this fact provides reason to be skeptical about whether AI can produce a genuinely intelligent machine. But parallelism, as I prove herein, is irrelevant. What Boucher has inadvertently glimpsed is one small part of a mathematical tapestry portraying the simple but undeniable fact that physical computation can be fundamentally different from ordinary, "textbook" computation (whether parallel or sequential). This tapestry does indeed immediately imply that human cognition may be uncomputable.
Article
The application of evolution-inspired strategies to hardware design and circuit self-configuration leads to the concept of evolvable hardware (EHW). EHW refers to self-configuration of electronic hardware by evolutionary/genetic algorithms (EA and GA, respectively). Unconventional circuits, for which there are no textbook design guidelines, are particularly appealing for EHW. Here we applied an evolutionary algorithm on a configurable digital FPGA chip in order to evolve analog-behavior circuits. Though the configurable chip is explicitly built for digital designs, analog circuits were successfully evolved by allowing feedback routings and by disabling the general clock. The results were unconventional circuits that were well fitted both to the task for which the circuits were evolved, and to the environment in which the evolution took place. We analyzed the morphotype (configuration) changes in circuit size and circuit operation through evolutionary time. The results showed that the evolved circuit structure had two distinct areas: an active area in which signal processing took place and a surrounding neutral area. The active area of the evolved circuits was small in size, but complex in structure. Results showed that the active area may grow during evolution, indicating that progress is achieved through the addition of units taken from the neutral area. Monitor views of the circuit outputs through evolution indicate that evolution proceeded through several distinct stages. This is in accordance with the plots of fitness, which show a progressive climb in a stair-like manner. Comparative studies were also performed of evolution with various population sizes. Results showed that the smaller the size of the evolved population, the faster was the evolutionary process. This was attributed to the high degeneracy in gene variance within the large population, resulting in a futile search.
Conference Paper
Evolvable hardware (EHW) refers to automated synthesis/optimization of HW (e.g. electronic circuits) using evolutionary algorithms. Extrinsic EHW refers to evolution using software (SW) simulations of HW models, while intrinsic EHW refers to evolution with HW in the loop, evaluating directly the behavior/response of HW. For several reasons (including mismatches between models and physical HW, limitations of the simulator and testing system, etc.) circuits evolved in SW may not perform the same way when implemented in HW, and vice-versa. This portability problem limits the applicability of SW-evolved solutions, and on the other hand, prevents the analysis (in SW) of solutions evolved in HW. This paper introduces a third approach to EHW called mixtrinsic EHW (MEHW). In MEHW, evolution takes place with hybrid populations in which some individuals are evaluated intrinsically and some extrinsically, within the same generation or in consecutive ones. A set of experiments using a Field Programmable Transistor Array (FPTA) architecture is presented to illustrate the portability problem, and to demonstrate the efficiency of mixtrinsic EHW in solving it.
Book
THE FIRST UNIFIED GUIDE TO EVOLVABLE HARDWARE FOR THE PRACTITIONER. Evolvable hardware (EHW) is an exciting new field that brings together reconfigurable hardware, artificial intelligence, and fault tolerance in order to design autonomous systems that can self-adapt to compensate for failures or unanticipated changes in their operational environments. Demonstrating a high degree of reliability in extreme environments, these systems are finding exciting new applications in the military and space exploration fields. Introduction to Evolvable Hardware: A Practical Guide for Designing Self-Adaptive Systems provides a highly practical introduction for engineers, designers, and managers involved in the development of adaptive, high-reliability systems, while at the same time introducing EHW concepts to new researchers in a structured way. The authors cover the fundamentals of simulated evolution and provide an overview of reconfigurable devices. Real-world digital and analog examples illustrate the power and versatility of EHW. Special emphasis is placed on: fault-tolerant applications; system integration concepts; real-time design issues. An ideal resource for anyone interested in applied rather than theoretical research in this growing field, the book unifies the existing literature, which has, until now, only been available in journal articles and conference proceedings, and presents it in such a way that readers may begin applying it to their own research and design projects in a relatively short time.
Book
Genetic algorithms have been used in science and engineering as adaptive algorithms for solving practical problems and as computational models of natural evolutionary systems. This brief, accessible introduction describes some of the most interesting research in the field and also enables readers to implement and experiment with genetic algorithms on their own. It focuses in depth on a small set of important and interesting topics—particularly in machine learning, scientific modeling, and artificial life—and reviews a broad span of research, including the work of Mitchell and her colleagues. The descriptions of applications and modeling projects stretch beyond the strict boundaries of computer science to include dynamical systems theory, game theory, molecular biology, ecology, evolutionary biology, and population genetics, underscoring the exciting "general purpose" nature of genetic algorithms as search methods that can be employed across disciplines. An Introduction to Genetic Algorithms is accessible to students and researchers in any scientific discipline. It includes many thought and computer exercises that build on and reinforce the reader's understanding of the text. The first chapter introduces genetic algorithms and their terminology and describes two provocative applications in detail. The second and third chapters look at the use of genetic algorithms in machine learning (computer programs, data analysis and prediction, neural networks) and in scientific models (interactions among learning, evolution, and culture; sexual selection; ecosystems; evolutionary activity). Several approaches to the theory of genetic algorithms are discussed in depth in the fourth chapter. The fifth chapter takes up implementation, and the last chapter poses some currently unanswered questions and surveys prospects for the future of evolutionary computation. Bradford Books imprint
Article
This book provides a broad overview of the entire field of DNA computation, tracing its history and development. It contains detailed descriptions of all major theoretical models and experimental results to date, which are lacking in existing texts, and discusses potential future developments. It also provides a useful reference source for researchers and students, and an accessible introduction for people new to the field. The field of DNA computation has flourished since the publication of Adleman's seminal article, in which he demonstrated for the first time how a computation may be performed at a molecular level by performing standard operations on a tube of DNA strands.
Article
One historian of technology has called the analogue computer 'one of the great disappearing acts of the Twentieth Century'. This paper will look briefly at the origins, development and decline of the electronic analogue computer.
Article
The Emperor's New Mind, physicist Roger Penrose's 1989 treatise attacking the foundations of strong artificial intelligence, is crucial for anyone interested in the history of thinking about AI and consciousness. Part survey of modern physics, part exploration of the philosophy of mind, the book is not for casual readers: though it is not overly technical, it rarely pauses to let the reader catch a breath. The overview of relativity and quantum theory, written by a master, is priceless and uncontroversial. The exploration of consciousness and AI, though, is generally considered to rest on shakier ground. Penrose claims that there is an intimate, perhaps unknowable relation between quantum effects and our thinking, and ultimately derives his anti-AI stance from his proposition that some, if not all, of our thinking is non-algorithmic. Of course, these days we believe that there are other avenues to AI than traditional algorithmic programming; while he has been accused of setting up straw robots to knock down, this accusation is unfair, since little was then known about the power of neural networks and behavior-based robotics to simulate (and, some would say, produce) intelligent problem-solving behavior. Whether these tools will lead to strong AI is ultimately a question of belief, not proof, and The Emperor's New Mind offers powerful arguments useful to believer and nonbeliever alike.
Article
Using our recently published statistical theory concerning the phase diagram of a low-conducting colloidal suspension subject to uniform electric fields [Phys. Rev. E 52, 1669 (1995)], we examine how the long-range electric-field-induced interparticle interactions affect the spatial arrangement of particles in such a suspension under the action of a nonuniform ac electric field. We find the conditions under which the resulting dielectrophoresis in nonuniform electric fields is accompanied by an electric-field-induced phase transition in the suspension. Moreover we predict that, in the case of positive dielectrophoresis, the particles will form chainlike aggregates aligned parallel to the electric field lines and attracted towards the higher electric-field region; whereas, for negative dielectrophoresis, the particles will form disklike aggregates aligned perpendicularly to the electric-field lines and repelled from the higher electric-field region. The theory also provides some insight regarding the dependence of the particle aggregation on the frequency of the applied nonuniform ac electric field. The predictions of the theory are consistent with the characteristic patterns of cell aggregation observed previously in high-gradient electric fields generated in microelectrode systems.
Article
Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy, and the amount of information that it can process is limited by the number of degrees of freedom it possesses. This paper explores the physical limits of computation as determined by the speed of light c, the quantum scale ħ and the gravitational constant G. As an example, quantitative bounds are put to the computational power of an 'ultimate laptop' with a mass of one kilogram confined to a volume of one liter.
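The headline bound in that paper follows from the Margolus-Levitin theorem: a system with average energy E can perform at most 2E/(πħ) elementary operations per second. Taking E to be the rest-mass energy mc² of a one-kilogram laptop reproduces the paper's figure of roughly 5 x 10^50 operations per second. A quick numerical check:

```python
import math

# Physical constants (SI units)
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

mass = 1.0                 # kg, the "ultimate laptop"
energy = mass * c ** 2     # rest-mass energy, about 9e16 J

# Margolus-Levitin bound: max operations per second = 2E / (pi * hbar)
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"{ops_per_second:.2e} ops/s")  # on the order of 5e50
```

The corresponding memory bound comes from the system's entropy rather than its energy, which is why the paper treats the two limits separately.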