DRAFT: THIS IS NOT THE FINAL COPY
THE PUBLISHED PAPER HAS BEEN EDITED AND REFORMATTED
DOI: 10.1109/CEC.2003.1299947
Exploiting Co-Evolution and a Modified Island Model to Climb the Core War Hill
F. Corno, E. Sanchez, G. Squillero
Abstract- In this paper, Core War, a very peculiar game popular in the mid-1980s, is exploited as a benchmark to improve the µGP, an evolutionary algorithm able to generate Turing-complete, realistic assembly programs. Two techniques were analyzed: co-evolution and a modified island model. Experimental results showed that the former is essential at the beginning of the evolutionary process, but may be deceptive at the end. The latter, on the other hand, enables focusing the search on specific regions of the search space and leads to dramatic improvements. The use of both techniques to help the µGP in its real task (test program generation for microprocessors) is currently being evaluated.
1 Introduction
While playing has an undeniable role in children's development, it also helps adult researchers. For instance, in game theory the idea of competitors playing a game is used to understand animal, human, and even gene behavior: any competitive environment, such as an economy, can be viewed as a game, and the metaphor is used to explain cooperative and uncooperative strategies. Also, in the field of evolutionary computation, games are commonly used to study the mechanisms of evolution: a specific game is played many times by players cultivated in populations to maximize their performance. At the end of the experiments, the results attained by the players measure, indirectly, the effectiveness of the evolutionary methodologies, selection schemes and operators adopted.
Similarly, in this paper a game is used to improve the evolutionary approach called µGP. A distinctive game is used to tweak the method and test new enhancements, while the final goal is to exploit the approach on a completely different problem.
µGP is an evolutionary method for generating Turing-complete, realistic assembly programs. The evolved programs may take full advantage of the assembly syntax, exploiting the different addressing modes and instruction-set asymmetries. The method is extremely general, but it was originally devised to generate test programs for microprocessors. Roughly speaking, a test program is an assembly program devised to extract information from the machine that executes it, rather than to calculate some function or value. Test programs may be used to validate the correctness of a microprocessor design, in a process similar to debugging, or to check its functionality after production.
Evolving effective test programs is a challenging task. First
of all, each single evaluation of the fitness function requires
simulating the hardware model of the microprocessor when
it is executing the test program. This process may require
hours even on a fast workstation. Additionally, the problem
is complex and the fitness landscape is completely unknown, but probably strongly rugged and deceptive.
This paper focuses on Core War, a game played by two or more programs written in an assembly language called redcode and run in a virtual computer called Memory Array Redcode Simulator (MARS). The object of the game is to cause all processes of the opposing programs to terminate, leaving the winner in sole possession of the machine. Redcode programs are usually called "warriors".
The next section details the Core War game and previous evolutionary attempts to devise effective warriors. Section 3 illustrates the proposed framework. Section 4 explains why Core War was chosen, describes the adopted enhancements and reports some experimental results. Section 5 concludes the paper.
2 Core Wars
A game where two or more programs try to kill opponents
by overwriting them was devised by Victor Vyssotsky,
Robert Morris Sr., and Dennis Ritchie in the early 1960s at
Bell Labs. Its name was "Darwin". About 20 years later, in 1984, D. G. Jones and A. K. Dewdney wrote the "Core War Guidelines", formalizing the modern Core War. In the same year, Dewdney's column in Scientific American [1] popularized the game, drawing huge interest from both the scientific community and hobbyists.
Core War competitions are usually called hills. Classical hills are repositories of N (usually 20) warriors. When a new program is submitted, it plays a certain number (usually 100) of one-on-one games against each of the N programs currently on the hill. The new warrior gets 3 points for each win and 1 point for each tie; then all scores are updated to reflect the battles (the existing programs do not replay each other, but their previous results are recalled) and programs are ranked from high to low. Finally, the lowest-ranked program is pushed off the hill.
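To make these scoring mechanics concrete, the following Python sketch mimics the hill rules described above; the play_match stand-in and the challenge helper are hypothetical names, not part of any existing Core War server.

    import random

    POINTS_WIN, POINTS_TIE = 3, 1
    ROUNDS = 100       # one-on-one games per pairing
    HILL_SIZE = 20     # warriors kept on the hill

    def play_match(warrior_a, warrior_b):
        """Hypothetical stand-in for ROUNDS battles on a MARS: returns (wins_a, ties, wins_b)."""
        wins_a = random.randint(0, ROUNDS)
        ties = random.randint(0, ROUNDS - wins_a)
        return wins_a, ties, ROUNDS - wins_a - ties

    def challenge(hill, results, newcomer):
        """Fight the newcomer against every incumbent, recompute scores, drop the weakest."""
        for incumbent in hill:
            w, t, l = play_match(newcomer, incumbent)
            results[(newcomer, incumbent)] = (w, t)
            results[(incumbent, newcomer)] = (l, t)  # incumbents' old results are recalled, not replayed
        contenders = hill + [newcomer]
        score = {w: sum(POINTS_WIN * results[(w, o)][0] + POINTS_TIE * results[(w, o)][1]
                        for o in contenders if o is not w)
                 for w in contenders}
        contenders.sort(key=score.get, reverse=True)
        return contenders[:HILL_SIZE], results       # the lowest-ranked warrior falls off

In this sketch warriors are simply hashable identifiers (e.g., strings), and the results of battles among incumbents are assumed to be already stored in the results dictionary.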
The International Core War Society (ICWS) was also established in 1984 for the creation and maintenance of Core War standards and for running Core War tournaments. There have been 6 annual tournaments and 2 standards (ICWS'86 and ICWS'88), and several different servers have hosted Core War competitions, such as the King of the Hill (KOTH) and the Pizza Hill. Several authors started competing in the game, either directly, writing sharp redcode warriors by hand, or automatically, inducing new ones. At the same time, other researchers focused on the metaphor of competing organisms in a synthetic environment [2]. A large community quickly appeared. In 1994 the ICWS proposed a new Core War standard named ICWS'94. However, the golden era of Core War was almost over and interest was swiftly decreasing. Today, the official Core War FAQ still maintains that the ICWS'94 standard "is currently being evaluated".
2.1 Evolutionary Core Wars
Since its appearance, Core War has attracted interest from the evolutionary-algorithm community. In 1991, John Perry showed how random code can evolve into successful Core War warriors in only a few generations (footnote 1). His paper targeted the evolution of predatory behavior in computer programs. More recently, several interesting new approaches have been devised (footnote 2). In most of the proposed methods, the first population of warriors is randomly generated. Then, since redcode is a very simple and completely orthogonal assembly language (all addressing modes can be used with all instructions, and all instructions share exactly the same format), individuals are evolved using very simple genetic operators.
Major contributions include Terry Newton's Redmaker (footnote 3), an experimental Core War evolver based on a grid-shaped evolution pool; Martin Ankerl's Yace (Yet Another Corewar Evolver, footnote 4); and Dave Hillis's RedRace (Red Queen's Race, footnote 5). The latter two approaches start with a random population. In Yace, warriors fight against each other and the losers are replaced by slightly modified versions of the winners. In RedRace, on the other hand, all warriors in the population compete against all warriors on a target hill. RedRace also includes sharp techniques for speeding up the search process, saving and restoring effective warriors (Valhalla and Resurrection), and handling multiple populations.
Footnote 1: John Perry, "Core Wars Genetics: the evolution of predation", available at: http://www.soberit.hut.fi/tik-76.115/96-97/palautu-set/groups/DSM/ma/documents/cwbasics.html
Footnote 2: http://corewars.sourceforge.net/cgi-bin/control.py?action=links
Footnote 3: http://www.tl.infi.net/~wtnewton/corewar/evol/
Footnote 4: http://students.fhs-hagenberg.ac.at/se/se00001/yace.html
Footnote 5: http://users.erols.com/dbhillis/
In 2002, Brian Blaha and Don Wunsch presented a study on automatic assembly program optimization using Core War as a case study [3]. They investigated different techniques and attained interesting results, although, in the authors' own words, the approach was probably unable to devise really effective warriors.
3 µGP
µGP is an evolutionary approach for generating Turing-complete programs to optimize the solution of a given problem. The environment can be used with different assembly languages and diverse metrics. µGP is composed of three parts: the evolutionary core, the instruction library and the external evaluator (see Figure 1).
Figure 1: µGP System Architecture (the evolutionary core draws macros from the instruction library, emits a test program, and receives a fitness value from the external evaluator)
The evolutionary core cultivates a population of individuals. It uses auto-adaptation mechanisms, dynamic operator probabilities, dynamic operator strength, and a variable population size. The instruction library is used to map individuals to valid assembly language programs. It contains a highly concise description of the assembly syntax or more complex, parametric fragments of code. Finally, the external evaluator simulates the assembly program, providing the necessary feedback to the evolutionary core.
The instruction library defines the assembly language syntax. It enumerates a set of macros, i.e., fragments of code of arbitrary length with an arbitrary number of parameters represented as "$n". A sharp use of the instruction library allows enumerating all valid instructions in a very compact way. Parameters may be used to encode operands, instructions and addressing modes, as in Figure 2.
.macro
.probability 6
$1.$2 $3$4+$5, $6$7+$8
.parameter constant add sub div mul mod nop
.parameter constant a b ab ba f x i
.parameter constant # $$ @ * { } < >
.parameter inner_generic_label
.parameter integer 0 799
.parameter constant # $$ @ * { } < >
.parameter inner_generic_label
.parameter integer 0 799
.endmacro
Figure 2: A Macro Describing Several Redcode Instructions
More formally, the instruction library supports different types of parameters:
- Integer: represents a numeric value; the valid range must be specified.
- Constant: represents a string from a predefined set.
- Inner Forward Label: a forward reference.
- Inner Backward Label: a backward reference.
- Inner Generic Label: a generic reference.
The user may explicitly specify the probability that a random node encodes a given macro. This is necessary to make assembly instructions equally probable: two compact macros may encode different numbers of elementary instructions, and if the macros are equally probable, the assembly instructions are not.
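As a minimal illustration of this weighting, the Python sketch below (with invented macro names and weights) selects a macro with probability proportional to the number of elementary instructions it encodes, so that every elementary instruction remains equally likely.

    import random

    # Each library entry: (macro_name, number_of_elementary_instructions_it_encodes).
    # A macro that expands to 6 elementary redcode instructions gets weight 6,
    # mirroring the ".probability 6" field of Figure 2.
    library = [
        ("arith_ops", 6),   # e.g. add/sub/div/mul/mod/nop in one parametric macro
        ("mov_op", 1),
        ("spl_op", 1),
    ]

    def pick_macro(library):
        names = [name for name, _ in library]
        weights = [w for _, w in library]   # the ".probability" values
        return random.choices(names, weights=weights, k=1)[0]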
Individuals are represented as directed graphs, where the
structure of the graph encodes the syntactic appearance of
the program and each node corresponds to a macro of the
instruction library (Figure 3).
The µGP evolutionary core exploits a generational strategy. A population of µ individuals is stored and, in each generation, λ genetic operators are applied. Operators are chosen according to their activation probabilities. Since each genetic operator produces a variable number of offspring, the number of individuals at the end of each generation is not fixed. After offspring generation, the population is sorted and the best µ individuals are selected for survival and transferred to the next generation. The evolutionary core implements both a crossover (recombination) operator and different mutation operators. All activation probabilities are variable and automatically self-adapted by the algorithm.
Further details can be found in [4].
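The generational scheme can be summarized by the following Python sketch; the operator interface, the survivor-based weight update and the evaluate callback are simplifying assumptions, not the actual µGP implementation (whose auto-adaptation rules are described in [4]).

    import random

    MU, LAMBDA = 20, 80   # values also used in the µCoreWar experiments

    def step(population, operators, probabilities, evaluate):
        """One generation: apply LAMBDA randomly chosen operators, keep the best MU."""
        offspring = []
        for _ in range(LAMBDA):
            i = random.choices(range(len(operators)), weights=probabilities, k=1)[0]
            parents = random.sample(population, k=2)
            for child in operators[i](*parents):   # an operator may return a variable number of children
                offspring.append((child, i))
        merged = [(ind, None) for ind in population] + offspring
        merged.sort(key=lambda pair: evaluate(pair[0]), reverse=True)
        survivors = merged[:MU]
        # crude self-adaptation: increase the weight of operators whose children survived
        for _, op_index in survivors:
            if op_index is not None:
                probabilities[op_index] *= 1.05
        total = sum(probabilities)
        probabilities[:] = [p / total for p in probabilities]
        return [ind for ind, _ in survivors]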
4 µCoreWar
As stated before, test program generation is a hard task. The
fitness function is computationally severe and running a
massive set of experiments is practically infeasible. Thus,
evolutionary operators and strategies are tuned resorting
mainly to engineers’ experience and intuition. This situation
is far from ideal.
Figure 3: Individual Representation (a directed graph whose nodes correspond to instruction-library macros, e.g., sub, jz and jmp instructions with their parameters)
Devising a sharp test program for exciting all behaviors in a microprocessor and devising a strong warrior able to exploit opponents' flaws are somewhat similar. Both tasks require very specific instructions, able to excite very specific corner cases. In both domains, a small mutation is likely to transform an effective program into a completely useless one. However, the MARS is extremely fast compared to simulating a detailed hardware description of a complex microprocessor executing a program. Thus, it is possible to run a considerable set of experiments to investigate new evolutionary strategies.
The modified prototype of the µGP exploited for generating
redcode warriors has been called µCoreWar. Enhancements
developed for climbing Core War hills will be integrated in
the µGP and tested on test program generation tasks.
Experiments were performed trying to climb the "evolved tiny hill" hosted by SourceForge (footnote 6). The hill contains the 20 strongest evolved warriors available on the Internet. Remarkably, they have all been generated using Redmaker, Yace, RedRace or customized versions of these three programs (except "99280th marsh herring", a warrior devised by a program written by Jaska Tyni).
Footnote 6: http://corewars.sourceforge.net/
4.1.1 Co-Evolution
Co-evolution is usually defined as evolution involving successive changes in two or more ecologically interdependent species that affect their interactions. In the field of evolutionary computation, co-evolution has been advocated as particularly useful for evolving players [6].
In evolutionary Core War, the simplest form of co-evolution can be achieved by co-evolving a population of warriors and a hill. This technique has some advantages and one main disadvantage.
First of all, co-evolving the hill is essential in the first generations. Random programs are unlikely to be effective warriors, thus their score would be invariably zero on a regular hill, preventing any evolution.
On the other hand, playing against a co-evolved hill may
bias the fitness function, leading to programs that achieve
good results on the test hill, but poor scores on the target
one.
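A hedged sketch of this simplest co-evolution scheme follows; the battle() callback and the helper names are assumptions used only to illustrate how the population and the co-evolved hill feed each other's fitness.

    def fitness_against_hill(warrior, hill, battle, rounds=100):
        """Score a warrior against every program on a hill (3 points per win, 1 per tie)."""
        score = 0
        for opponent in hill:
            wins, ties, _losses = battle(warrior, opponent, rounds)
            score += 3 * wins + ties
        return score

    def coevolve_hill(population, hill, battle, hill_size=20):
        """Keep the strongest warriors found so far as the co-evolved hill.
        Early in the run this gives random programs a non-zero gradient;
        late in the run it may bias the search away from the real target hill."""
        candidates = hill + population
        candidates.sort(
            key=lambda w: fitness_against_hill(w, [o for o in hill if o is not w], battle),
            reverse=True)
        return candidates[:hill_size]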
Another possible solution would have been to include dummy warriors in the target hill. Although effective, this solution targets specifically the Core War environment and cannot be easily translated to the test program generation domain. Thus it was discarded, as were all other Core War-specific enhancements.
4.1.2 Modified Island Model
In the Island Model, sometimes called Migration Model or
Coarse Grained Model, the population is partitioned into
several distinct subpopulations. Then, these subpopulations
have extensive periods of isolated evolution occasionally
interspersed with migration. Several authors exploited this
mechanism, showing its advantages over canonical single-
population evolutionary algorithms [5].
Unlike the standard island model, in µCoreWar different populations are used to independently explore different regions of the search space. The populations use instruction libraries with an identical set of macros, but with different macro probabilities.
Thus, each sub-population is polarized to explore a specific region of the search space. More specifically, three populations exploit three strategies, while a fourth one is not polarized (Table 1). The first population uses all available direct and indirect addressing modes, with optional pre-increment and decrement; it favors the use of large integer constants ([0, 799]). The second population is identical to the first one, except that it favors the use of small integer constants ([-8, +8]). The third population, instead, favors the MOV (move) and SPL (split) instructions.
Population   Strategy
STR1         Favor small integer constants
STR2         Favor large integer constants
STR3         Favor MOV and SPL operations
ALL          No special polarization
Table 1: Strategies
These populations evolve independently for N = 60 generations, using µ = 20 and λ = 80 as parameters. Then, the best 5 individuals of each population are duplicated into the other three populations, and the process is iterated. Parameter N is reduced during the experiments.
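The modified island model can be sketched as follows; the polarized libraries are passed opaquely to an evolve_generation() stand-in, and the halving of the isolation period is only an assumption standing in for "Parameter N is reduced during the experiments".

    MIGRANTS = 5       # best individuals copied into every other population
    N_INITIAL = 60     # isolated generations between migrations

    def run_islands(populations, libraries, evolve_generation, evaluate,
                    total_generations=250, isolation=N_INITIAL):
        """Each population evolves with its own polarized instruction library;
        every `isolation` generations the best individuals migrate to all other islands."""
        generation = 0
        while generation < total_generations:
            for _ in range(isolation):
                for idx, pop in enumerate(populations):
                    populations[idx] = evolve_generation(pop, libraries[idx])
                generation += 1
            # migration: duplicate the best MIGRANTS of each island into every other island
            for src, pop in enumerate(populations):
                best = sorted(pop, key=evaluate, reverse=True)[:MIGRANTS]
                for dst in range(len(populations)):
                    if dst != src:
                        populations[dst].extend(best)
            isolation = max(1, isolation // 2)   # assumed schedule for reducing N
        return populations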
Figure 4: Results Attained by Different Approaches (generations 1 through 100; score of the best individual, on a 0-5000 scale, versus generation for the ALL, Simple, STR1, STR2 and STR3 configurations)
When compared to a conventional approach, all these strategies cause a reduction in performance. Figure 4 shows the results attained by the best individual with the different instruction libraries in the first 100 generations. The line labeled Simple reports the results of the standard approach, with no migrations and where all instructions are given the same probability. The lines labeled STR1, STR2 and STR3 show the results of the three strategies with polarized instruction libraries. In the first 60 generations, all sub-populations achieve poor results. The line labeled ALL reports the results of the fourth population; since it uses exactly the same parameters and the same instruction library as the Simple approach, the results before the first migration are identical. However, values rise dramatically after it.
More interestingly, Figure 5 reports the results for the complete optimization process of 250 generations. The effect of co-evolution can easily be seen in all figures. The fitness values of all experiments are monotonic with respect to the co-evolved hills, but the scores against the target hill sometimes decrease.
In particular, the first effect of each migration is a reduction of the fitness values. However, the new genetic material inserted in the population is quickly exploited to improve the best warrior.
Figure 5: Results Attained by Different Approaches (generations 1 through 250; score, on a 3000-5000 scale, versus generation for the ALL and Simple configurations)
These strategies are directly applicable to test program generation. All modern microprocessors, such as the SPARC [7], have two different modes of operation: real (or user) and privileged (or supervisor). The former is intended for executing normal user programs and prohibits potentially dangerous operations, such as modification of the processor status register (PSR). The latter, on the other hand, is intended for executing operating system code. Usually, these two modes are physically distinct in the microprocessor and use different resources, such as different stack registers. In order to check the correctness of a design, the µGP must be able to generate appropriate code for both modes.
As the first experiments showed, this result can be achieved more efficiently by using two sub-populations with polarized instruction libraries: the first targeting the real mode, and the second one targeting the protected mode.
On the other hand, the use of a co-evolution mechanism is currently under study. However, the first results attained by exacerbating the fitness function during evolution are promising.
Finally, it must be noted that the evolved warriors behaved reasonably well, getting scores between 4,800 and 5,000, but no program was ever able to enter the hill. Indeed, the best program in June 2003, "rdrc: Dementia Minni", was evolved by Dave Hillis with RedRace and scored 5,604 points. "Eco 1.3/1020/500", the 20th one, scored 5,239 points.
5 Conclusions
This paper presented an evolutionary approach for devising Core War warriors. Core War has been selected as a benchmark problem, while the final goal of the research is to explore new evolutionary strategies applicable to a real problem: test program generation for microprocessors.
Two techniques were used for the task: co-evolution and a modified island model.
Experimental results showed that the former is essential at the beginning of the evolutionary process, but may be deceptive at the end. The latter, on the other hand, enables exploring different regions of the search space and leads to effective improvements.
The use of both techniques during test program generation for microprocessors is currently being evaluated.
6 Acknowledgments
The authors wish to thank Martin Ankerl, Brian Blaha, Ken Paul Dolan, Dave Hillis, Terry Newton and Donald Wunsch for their friendly advice and insightful comments.
7 Bibliography
[1] A. K. Dewdney, "Computer recreations: In the game called Core War hostile programs engage in a battle of bits", Scientific American, 250(5), pp. 14-22, 1984
[2] T. S. Ray, "An evolutionary approach to synthetic biology: Zen and the art of creating life", Artificial Life, 1(1/2), pp. 195-226, 1994
[3] B. Blaha, D. Wunsch, "Evolutionary programming to optimize an assembly program", CEC'02: Proceedings of the Congress on Evolutionary Computation, pp. 1901-1903, 2002
[4] F. Corno, G. Squillero, "An Enhanced Framework for Microprocessor Test-Program Generation", EuroGP 2003: 6th European Conference on Genetic Programming, Essex (UK), April 14-16, 2003, pp. 307-315
[5] F. Fernández, M. Tomassini, J. M. Sánchez, "Experimental Study of Isolated Multipopulation Genetic Programming", Proceedings of the 26th Annual Conference of the IEEE Industrial Electronics Society, IEEE Press, Vol. 1697, pp. 2672-2677, 2000
[6] J. E. Davis, G. Kendall, "An Investigation, Using Co-Evolution, to Evolve an Awari Player", Proceedings of the 2002 Congress on Evolutionary Computation, pp. 1408-1413, 2002
[7] SPARC International, The SPARC Architecture Manual