Conference: Proceedings of the Second IASTED International Conference on Computational Intelligence, San Francisco, California, USA, November 20-22, 2006
Instincts are a vital part of the behavioral repertoire of organisms. Even humans rely heavily on these inborn mechanisms for survival. Many creatures, for example, build elaborate nests without ever learning to do so through experience. This paper explores this evolutionary legacy in the context of an artificial goal-seeking neural network. An instinct is defined as a simple stimulus-response sequence that is triggered by environmental and other events. The well-known "Monkey and Bananas" problem is used as the task situation, with instincts implemented as "hard-wired" neurons in the brain of a monkey. Using a genetic algorithm, a population of monkeys was evolved that successfully solved a task none of them could solve by experience alone. The solutions were also found to be quite adaptable to variations in the task; in fact, more so than a hand-crafted solution.
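The abstract describes the approach only at a high level. As a rough illustration of the core idea, the sketch below evolves a table of stimulus-to-response instincts for a heavily simplified Monkey and Bananas world. The state names, world dynamics, fitness scoring, and genetic-algorithm settings are all assumptions made for illustration, not the paper's actual encoding, which uses hard-wired neurons in the Mona network.

```python
# Hypothetical sketch (not the paper's code): a genetic algorithm that
# evolves hard-wired stimulus->response "instincts" for a simplified
# Monkey-and-Bananas task. Names, dynamics, and parameters are assumptions.
import random

STIMULI = ["box-far", "box-near", "box-under", "on-box"]
RESPONSES = ["go-to-box", "push-box", "climb-box", "grab"]

def simulate(instincts, max_steps=10):
    """Score one episode; instincts maps each stimulus to a response."""
    state, score = "box-far", 0
    for _ in range(max_steps):
        action = instincts.get(state)
        # Hand-coded dynamics of the simplified task.
        if state == "box-far" and action == "go-to-box":
            state, score = "box-near", score + 1
        elif state == "box-near" and action == "push-box":
            state, score = "box-under", score + 1
        elif state == "box-under" and action == "climb-box":
            state, score = "on-box", score + 1
        elif state == "on-box" and action == "grab":
            return score + 10          # bananas grabbed: task solved
    return score

def evolve(pop_size=50, generations=40, mutation_rate=0.1):
    population = [{s: random.choice(RESPONSES) for s in STIMULI}
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate, reverse=True)
        parents = population[:pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {s: random.choice([a[s], b[s]]) for s in STIMULI}  # uniform crossover
            for s in STIMULI:                       # point mutation
                if random.random() < mutation_rate:
                    child[s] = random.choice(RESPONSES)
            children.append(child)
        population = parents + children
    return max(population, key=simulate)

print(evolve())   # typically converges to the four-step instinct sequence
```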
... With the advent of evolutionary algorithms in computer science, it is easy to envision how such solutions may be applied to the problem of devising instinctive behaviors for artificial entities. Stanley, Bryant, and Miikkulainen (2005) and Portegys (2006) have approached this idea by applying genetic algorithms to the generation of initial, innate behaviors. ...
... In a second paper, Portegys (2006) presents his experimental results in evolving an instinctive behavioral set for the Monkey and Bananas problem. The work begins with a general discussion of the realm of instinctive behavior, making the claim that nature often uses instinct as a more efficient way of deploying useful behaviors than acquiring them through experience. ...
This paper examines the feasibility of implementing an instinct-based behavioral model in an artificial agent. This research serves as a collection of information related to devising an artificially intelligent entity whose decisions are based on human instinct. The treatment of instincts is defined by the bounds of psychology and ethology. The research focuses on two concepts: (1) the psychological and ethological implications of instinctive behavior, and (2) the use of instinct-like functional primitives in artificial intelligence systems. The primary goal of this paper is to survey the existing research that may offer insight into producing a realistic behavior model based on instincts. Such a model would serve as the basis for a human behavior model to be used in simulation and training scenarios.
... The Mona goal-seeking neural network was used for this task. Mona has been shown to be capable of supporting instinct evolution to solve the Monkey and Bananas Problem [7], as well as effectively learning mazes requiring the retention of context information over time [6]. Q-Learning [9], a well-known reinforcement learning technique that is amenable to stimulus-response search space tasks, was used as a comparison to the neural network. ...
Instinct and experience are shown to form a potent combination to achieve effective foraging in a simulated environment. A neural network capable of evolving instinct-related neurons and learning from experience is used as the brain of a simple foraging creature that must find food and water in a 3D block world. Instincts provide basic tactics for unsupervised exploration of the world, allowing pathways to food and water to be learned. The combination of both instinct and experience was found to be more effective than either alone. As a comparison, neural network learning also proved superior to Q-Learning on the foraging task.
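Q-Learning, the comparison baseline named above, follows a standard update rule, so a minimal tabular sketch can make the comparison concrete. The one-dimensional world, reward layout, and hyperparameters below are illustrative stand-ins for the paper's 3D block world, not its actual experimental setup.

```python
# A minimal tabular Q-learning sketch for a toy foraging task, using the
# standard update Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
# The 1-D corridor, reward, and hyperparameters are illustrative only.
import random
from collections import defaultdict

ACTIONS = [-1, +1]                 # move left / right along a 1-D corridor
FOOD, SIZE = 7, 10                 # food at cell 7 in a 10-cell world

def step(state, action):
    nxt = max(0, min(SIZE - 1, state + action))
    return nxt, (1.0 if nxt == FOOD else 0.0)

Q = defaultdict(float)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
for episode in range(500):
    s = random.randrange(SIZE)
    for _ in range(50):
        # Epsilon-greedy action selection.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda a: Q[s, a]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * max(Q[s2, b] for b in ACTIONS) - Q[s, a])
        s = s2
        if r > 0:
            break

greedy = {s: max(ACTIONS, key=lambda a: Q[s, a]) for s in range(SIZE)}
print(greedy)   # the learned policy should point toward the food cell
```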
Previously, the concept of natural motivations (i.e., motivations related to the satisfaction of natural needs) has generally been integrated into reactive agents, and particularly into animats. In this paper, we present and discuss a generic model that introduces such notions into hybrid agents. The basis of our model is Abraham Maslow's pyramid of needs.
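As a rough sketch of how such a pyramid can arbitrate among needs, the snippet below always services the lowest unsatisfied level first. The level names, satisfaction scores, and threshold are illustrative assumptions, not the authors' model.

```python
# Sketch of Maslow-style need arbitration, under the assumption that the
# agent services the lowest unsatisfied pyramid level first. Level names
# and the threshold are placeholders, not the paper's actual model.
MASLOW_LEVELS = ["physiological", "safety", "belonging",
                 "esteem", "self-actualization"]

def select_goal(satisfaction, threshold=0.5):
    """Return the most basic need whose satisfaction falls below threshold."""
    for level in MASLOW_LEVELS:
        if satisfaction.get(level, 0.0) < threshold:
            return level
    return None  # all needs satisfied; agent may idle or pursue other goals

print(select_goal({"physiological": 0.9, "safety": 0.3}))  # -> "safety"
```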
Goal-seeking behavior in a connectionist model is demonstrated using the examples of foraging by a simulated ant and cooperative nest-building by a pair of simulated birds. The model, a control neural network, translates needs into responses. The purpose of this work is to produce lifelike behavior with a goal-seeking artificial neural network. The foraging ant example illustrates the intermediation of neurons to guide the ant to a goal in a semi-predictable environment. In the nest-building example, both birds, executing gender-specific networks, exhibit social nesting and feeding behavior directed toward multiple goals.
Evolutionary and neural computation share the same philosophy of using biological information processing for the solution of technical problems. Besides this important but rather abstract common foundation, there have also been many successful combinations of both methods for solving problems as applied as the design of turbomachinery components. In this paper, we will introduce evolutionary algorithms primarily for a "neural" audience and demonstrate their usefulness for neural computation. Furthermore, we will introduce a list of some more recent trends in combining evolutionary and neural computation, which show that synergies between the two fields go beyond the typically quoted example of topology optimisation of neural networks. We strive to increase the awareness of these trends in the neural computation community and spark some interest in one or the other of the shown directions.
An important function of many organisms is the ability to use contextual information in order to increase the probability of achieving goals. For example, a street address has a particular meaning only in the context of the city it is in. In this paper, predisposing conditions that influence future outcomes are learned by a goal-seeking neural network called Mona. A maze problem is used as a context-learning exercise. At the beginning of the maze, an initial door choice forms a context that must be remembered until the end of the maze, where the same door must be chosen again in order to reach a goal. Mona must learn these door associations and the intervening path through the maze. Movement is accomplished by expressing responses to the environment. The goal-seeking effectiveness of the neural network in a variety of maze complexities is measured.
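The maze task itself is easy to state in code, which helps clarify what "retention of context" demands of the learner. The sketch below encodes only the trial structure and a perfect-memory policy; the names and trial format are assumptions, and Mona's actual learning mechanism is not modeled.

```python
# Illustrative encoding of the context-maze task described above: the door
# chosen at the entrance must be chosen again at the exit to reach the goal.
# This sketch defines the task only, not the Mona network that learns it.
import random

def make_trial(num_doors=3, path_length=5):
    context_door = random.randrange(num_doors)   # entrance choice to remember
    corridor = ["forward"] * path_length         # intervening maze path
    return context_door, corridor

def run_agent(policy, trial):
    """policy(context, corridor) must return the exit-door choice."""
    context_door, corridor = trial
    return policy(context_door, corridor) == context_door   # goal reached?

# A perfect agent simply retains the context across the corridor.
success = run_agent(lambda ctx, path: ctx, make_trial())
print(success)  # True
```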
We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the emerged formations, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior.
A formal model of distributed building is presented that was inspired by the observation of wasp colonies. Algorithms have been obtained that allow a swarm of simple agents, moving randomly on a three-dimensional cubic lattice, to build coherent structures.
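A minimal sketch of the stigmergic mechanism such models rely on is given below: an agent deposits a brick whenever the occupancy pattern of its local neighborhood matches a rule. The single placeholder rule and the random site sampling are illustrative assumptions, not the paper's rule set or movement model.

```python
# Hedged sketch of stigmergic building on a cubic lattice: deposit a brick
# wherever the local occupancy pattern matches a rule. The one rule below
# is a placeholder, and random site sampling stands in for a random walk.
import random

occupied = {(0, 0, 0)}   # seed brick
NEIGHBORS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def local_pattern(site):
    """Occupancy (0/1) of the six face-adjacent lattice cells."""
    return tuple(int(tuple(c + d for c, d in zip(site, n)) in occupied)
                 for n in NEIGHBORS)

def rule_fires(pattern):
    return sum(pattern) == 1   # placeholder rule: extend next to one brick

for _ in range(500):
    site = tuple(random.randint(-3, 3) for _ in range(3))
    if site not in occupied and rule_fires(local_pattern(site)):
        occupied.add(site)

print(len(occupied), "bricks placed")
```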
We describe a new problem solver called STRIPS that attempts to find a sequence of operators in a space of world models to transform a given initial world model into a model in which a given goal formula can be proven to be true. STRIPS represents a world model as an arbitrary collection of first-order predicate calculus formulas and is designed to work with models consisting of large numbers of formulas. It employs a resolution theorem prover to answer questions about particular models and uses means-ends analysis to guide it to the desired goal-satisfying model.
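The STRIPS operator formalism (preconditions plus add and delete lists) is easy to illustrate. The toy planner below applies it to a Monkey-and-Bananas-style encoding with a brute-force depth-limited search; the operators are hypothetical, and the search merely stands in for STRIPS's actual machinery, which couples a resolution theorem prover with means-ends analysis.

```python
# Toy STRIPS-style operators (precondition, add list, delete list) with a
# depth-limited forward search. Operators are illustrative; the original
# system used resolution theorem proving and means-ends analysis instead.
OPS = {
    "go-to-box": ({"box-far"},   {"box-near"},    {"box-far"}),
    "push-box":  ({"box-near"},  {"box-under"},   {"box-near"}),
    "climb-box": ({"box-under"}, {"on-box"},      set()),
    "grab":      ({"on-box"},    {"has-bananas"}, set()),
}

def plan(state, goal, depth=6):
    """Find an operator sequence achieving the goal, up to a depth limit."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, (pre, add, dele) in OPS.items():
        if pre <= state:
            sub = plan((state - dele) | add, goal, depth - 1)
            if sub is not None:
                return [name] + sub
    return None

print(plan({"box-far"}, {"has-bananas"}))
# -> ['go-to-box', 'push-box', 'climb-box', 'grab']
```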
This article presents a novel method for the evolution of artificial autonomous agents with small neurocontrollers. It is based on adaptive, self-organized compact genotypic encoding (SOCE) generating the phenotypic synaptic weights of the agent's neurocontroller. SOCE implements a parallel evolutionary search for neurocontroller solutions in a dynamically varying and reduced subspace of the original synaptic space. It leads to the emergence of compact successful neurocontrollers starting from large networks. The method can serve to estimate the network size needed to perform a given task, and to delineate the relative importance of the neurons composing the agent's controller network.
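As a loose illustration of searching a reduced synaptic subspace, the sketch below evolves genomes that pair weights with a pruning mask and grants a small parsimony bonus for inactive synapses. The stand-in objective, mutation scheme, and all parameters are assumptions; this shows the general concept, not the SOCE algorithm itself.

```python
# Loose sketch of compact genotypic encoding: a genome pairs synaptic
# weights with a pruning mask, so evolution searches a shrinking subspace
# of the full network. Objective and parameters are placeholders.
import random

N_SYNAPSES = 64

def random_genome():
    return ([random.uniform(-1, 1) for _ in range(N_SYNAPSES)],
            [random.random() < 0.9 for _ in range(N_SYNAPSES)])  # active mask

def phenotype(genome):
    weights, mask = genome
    return [w if m else 0.0 for w, m in zip(weights, mask)]

def fitness(genome):
    # Placeholder task score plus a parsimony bonus for pruned synapses.
    task_score = -abs(sum(phenotype(genome)))   # stand-in objective
    return task_score + 0.01 * genome[1].count(False)

pop = [random_genome() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:15]
    # Gaussian weight mutation plus occasional mask bit-flips.
    pop = survivors + [([w + random.gauss(0, 0.1) for w in g[0]],
                        [m if random.random() > 0.02 else not m for m in g[1]])
                       for g in random.choices(survivors, k=15)]

best = max(pop, key=fitness)
print(sum(best[1]), "active synapses remain")
```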
This paper series focuses on the intersection of neural networks and evolutionary computation. It is addressed to researchers from artificial intelligence as well as the neurosciences.