Variable Length Genomes for Evolutionary Algorithms
C.-Y. Lee
Department of Mechanical Engineering
California Institute of Technology
Pasadena, CA 91125
E.K. Antonsson
Department of Mechanical Engineering
California Institute of Technology
Pasadena, CA 91125
Abstract

A general variable length genome, called exG, is developed here to address the problems of fixed length representations in canonical evolutionary algorithms. Convergence aspects of exG are discussed and preliminary results of exG usage are presented. The generality of this variable length genome is also shown via comparisons with other variable length representations.
1 Introduction

Evolutionary algorithms (EA's) are robust, stochastic optimizers loosely based on natural selection and evolution. The idea is to have a population of solutions breed new solutions, using stochastic operators, from which the 'fittest' solutions are chosen to survive to the next generation. This procedure is iterated until the population converges. In most instances, the EA will converge to the global optimum. Due to their robustness and general applicability, EA's have found widespread use in a variety of different applications. However, typical EA's have difficulty performing adequate searches in spaces of varying dimensionality. The primary reason for this difficulty is that the prevalent EA's, genetic algorithms (GA's) and evolutionary strategies (ES's), use fixed length encodings/parametrizations, called genomes, of solution spaces.
As an example, take the problem of optimizing neu-
ral network topology. Briefly, neural network topology
consists of layers of nodes whose inputs and outputs
are interconnected. Finding the optimal topology re-
quires the determination of optimal number of nodes
and interconnections. If a fixed length representation
is used, a limit on the maximum number of possible
nodes and interconnections must be set. Therefore,
the search space is limited to a small subset of the
complete solution space.
To remedy these shortcomings, a general variable length representation for use in EA's was developed and is presented in this paper along with preliminary results. We follow this brief introduction with a review of GA's and ES's in Section 2. Section 3 discusses stochastic evolutionary operators, and the theory behind variable length genomes is laid out in Section 4. The development of a novel variable length genome, called exG, is outlined in Section 5, followed by preliminary results of EA's using exG's in Section 6. We exhibit exG generality in Section 7 and conclude the paper with a brief summary.
2 Review of GA's and ES's

While both GA's and ES's use string representations of solution space, termed genomes, there are many differences between the two. In particular, they differ in coding schemes, stochastic operators, and selection criteria. These differences can be traced back to their origins and are discussed subsequently. More thorough reviews of EA's can be found in [2, 3, 5, 8, 10, 11]. Note that the term EA is now used as an umbrella term for GA's and ES's.
2.1 Genetic Algorithms
Genetic algorithms were introduced in the early 1970’s
by Holland as a manifestation of computational sys-
tems modelling natural systems. A binary encoding
of solution parameters was chosen due to their sim-
plicity and facility in computation. In analogy with
naturally occurring genetic operators, mutation and
crossover were introduced as the search mechanisms.
Briefly, mutation is a point operator that switches the
value of a single bit in a genome with low probability.
Crossover, on the other hand, is a recombination operator that interchanges blocks of genes between two parents with relatively high probability. A variety of
selection schemes are used in GA’s, mostly based on
fitness proportional or tournament type methods.
2.2 Evolutionary Strategies
Evolutionary strategies were introduced in the late
1960’s by Schwefel and Rechenberg for finding optima
of continuous functions, and, hence, used real-valued
strings. As compared to GA’s, ES’s were not designed
with natural evolutionary systems in mind. In fact, the
first ES implementations did not use populations. The
search mechanism was confined to mutation-like operators. Mutation in ES's involves modifying each gene in a genome with a zero-mean, σ²-variance random variable. Later developments led to the use of populations and self-adapting mutation variances. With the advent of population searches, the (µ, λ) and (µ + λ) selection schemes were introduced. Here, µ is the number of parents and λ is the number of offspring, with λ > µ in most cases. The ',' denotes that new population members are selected only from the offspring, whereas the '+' denotes that new members are selected from the combined pool of parents and offspring.
3 Stochastic Evolutionary Operators

Details of canonical mutation and crossover operators are discussed here with the help of examples. For clarity, we use binary genomes in the examples.
Mutation is a point operator that randomly perturbs
the value of a single gene. For example, mutation of
the first gene in the genome (1 0 0 1 0 1 1 1 0) results in (0 0 0 1 0 1 1 1 0). The benefits of mutation
differ according to whether a GA or ES is being imple-
mented. In GA’s, it is believed that mutation only has
a secondary effect on convergence by allowing popula-
tions to escape from local optima [4]. In ES’s, though,
mutation plays a vital role in the efficient search of
continuous spaces, since it is the lone operator in ES’s.
Crossover is a recombination operator that swaps
blocks of genes between genomes. In the example be-
low, a two point crossover is shown. Here, the sixth
and eighth gene positions are chosen as the range over
which genes are to be swapped (|’s demarcate the two
crossover points). The idea behind crossover is to allow
good building blocks to travel from genome to genome,
which then leads to very efficient searches of solution space.

before crossover:  (1 0 0 1 0 |1 1 1| 0)    (0 1 1 0 1 |0 0 0| 1)
after crossover:   (1 0 0 1 0 |0 0 0| 0)    (0 1 1 0 1 |1 1 1| 1)
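These two canonical operators can be sketched in a few lines. This is a minimal illustration, not code from the paper; the mutation probability and the specific bit strings are arbitrary choices.

```python
import random

def mutate(genome, p=0.05):
    """Point mutation: flip each bit independently with low probability p."""
    return [1 - g if random.random() < p else g for g in genome]

def two_point_crossover(p1, p2, lo, hi):
    """Two-point crossover: swap the block of genes at positions lo..hi
    (1-indexed, inclusive) between two equal-length parents."""
    c1, c2 = p1[:], p2[:]
    c1[lo - 1:hi], c2[lo - 1:hi] = p2[lo - 1:hi], p1[lo - 1:hi]
    return c1, c2

# Crossover over gene positions 6 through 8, as in the text.
p1 = [1, 0, 0, 1, 0, 1, 1, 1, 0]
p2 = [0, 1, 1, 0, 1, 0, 0, 0, 1]
c1, c2 = two_point_crossover(p1, p2, 6, 8)
# c1 == [1, 0, 0, 1, 0, 0, 0, 0, 0]: genes 6-8 now come from p2.
```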
4 Variable Length Genomes

While most genomes in EA's have the same, fixed length throughout an EA run, a variety of methods have been developed to alter genome lengths within and between generations. These variable length
genomes (VLG’s) have had varying success in specific
applications. Nonetheless, VLG’s have been charac-
terized by Harvey in SAGA, or Species Adaptation
Genetic Algorithms [6, 7]. The results and conclu-
sions of SAGA are reviewed here, beginning with a
brief discussion on fitness surfaces.
4.1 SAGA and Its Results
The fitness surface of a parametrized search space can
be characterized by its correlation. Completely uncor-
related landscapes can be imagined as rough surfaces,
where a point and its neighbors have dissimilar fit-
nesses; conversely, correlated landscapes are smooth
surfaces, where fitness variations between a point and
its neighbors are small. Kauffman [9] has shown that any adaptive walk seeking an optimum across a completely uncorrelated landscape will suffer from doubled waiting times between each successful step. The reason is that, on average, after every successful step, the portion of the solution space that is more fit is halved.

Most fitness landscapes have some locally correlated
neighborhood such that a correlation length can be
assigned depending on the magnitude of locality. In
terms of EA’s, length is defined to be a distance met-
ric of how many point mutations are required to trans-
form from one genome to another. Harvey then defines
a ‘long jump’ to be a transformation whose length is
greater than the correlation length [6, 7]. It follows
that search over a fitness surface with long jumps will
suffer from doubled waiting times. Taken in the con-
text of variable length genomes, long jumps are large
length changes. These observations lead to the follow-
ing characteristics of EA’s using VLG’s:
- Long jumps lead to large length fluctuations in early populations. However, as fitness increases, waiting times will double and long jumps will cease.

- After long jumps have ceased, small length change operators are able to effectively search the locally correlated neighborhood. In the limit that length changes cease, the EA reduces to a standard EA.

- If genomes can increase in length indefinitely due to selection pressures, it will only occur very gradually, such that at any generation all genomes have similar lengths.
The question remains, however, as to what sort of
length-changing operators are viable and acceptable.
A mutation-like operator could change lengths by ran-
domly inserting or deleting a gene. By stringing sev-
eral of these operations together, larger length changes
can be made. This raises the question of how the proper decoding of a genome is ensured. Similarly,
for length changing crossover operators, only homol-
ogous or similar regions can be interchanged between
genomes to ensure valid decoding. For example, the
genes for the trait height can’t be swapped with genes
for the trait weight. The upshot of the above discus-
sion is that some sort of identification string or tem-
plate is required to keep track of how a length changing
genome is decoded.
5 The Extended Genome (exG)

In this section, we develop and explore certain aspects of a general variable length genome, entitled exG. We begin by revisiting some properties of canonical genomes.
5.1 Canonical Genomes Revisited
The canonical genome in EA’s can be rewritten as a
two-stringed genome, with the first string containing
the encoding and the other the index, or name, of each
gene. For example, if we name our genes by position,
we have the following two string genome:
encoding: 1 0 0 1 0 1 1 1 1
index:    1 2 3 4 5 6 7 8 9
No doubt, we have stumbled upon an identifying
string, which, as mentioned in Section 4.1, is a nec-
essary characteristic of any variable length genome.
Taking this cue, we extend the canonical genome to a
variable length representation.
5.2 A General Variable Length Genome
We start by using the two-stringed genome representa-
tion. However, instead of using integers as gene names,
we use real numbers. As in canonical genomes, ordering is determined based on the identifying string. Below is an example of such a genome:

encoding: 1 0 0 1 0 1 1 1 1
index:    .1 .12 .25 .38 .46 .55 .67 .8 .9

If the identifying string is allowed to mutate according to an ES-like addition of a Gaussian variable, single point reorderings are possible. For example, if the second identifying gene in the above genome is mutated by −0.03, then after re-ordering the genome becomes

encoding: 0 1 0 1 0 1 1 1 1
index:    .09 .1 .25 .38 .46 .55 .67 .8 .9
Evidently, larger reorderings, like inversion, can occur
if several genes are mutated simultaneously. Studies
have shown that such reorderings are often effective
in solving ‘EA hard’ fitness surfaces, which often arise
due to poor choice of coding schemes.
Crossover is more subtle, and is used as the length
changing operator in exG’s. The idea, as in GA theory,
is that crossover allows the transfer of good building
blocks between genomes. So, how is crossover accomplished? A range of identifying values is chosen over which to swap sections between genomes. For example, the two genomes before crossover are

1st genome: 1  |0  0  1  0  1  1|  1  1
            .1 |.2 .3 .4 .5 .6 .65| .8 .9

2nd genome: |0   1|   0   0   1   1
            |.11 .15| .81 .85 .89 .9

and after crossover with the identifying range .11 to .70, the genomes become

1st genome: 1  0   1   1  1
            .1 .11 .15 .8 .9

2nd genome: 0  0  1  0  1  1   0   0   1   1
            .2 .3 .4 .5 .6 .65 .81 .85 .89 .9
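The exG crossover just described can be sketched as follows. This is an illustrative implementation, not code from the paper; genes are stored as (value, identifier) pairs, and the identifier values are chosen only so that the swap range .11 to .70 captures six genes of the first genome and two of the second.

```python
def exg_crossover(g1, g2, lo, hi):
    """exG crossover: swap all genes whose identifying value falls in
    [lo, hi] between two genomes, then re-sort each child by its
    identifying string.  Genes are (value, identifier) pairs."""
    def split(g):
        inside = [vi for vi in g if lo <= vi[1] <= hi]
        outside = [vi for vi in g if not (lo <= vi[1] <= hi)]
        return inside, outside
    in1, out1 = split(g1)
    in2, out2 = split(g2)
    c1 = sorted(out1 + in2, key=lambda vi: vi[1])
    c2 = sorted(out2 + in1, key=lambda vi: vi[1])
    return c1, c2

g1 = list(zip([1, 0, 0, 1, 0, 1, 1, 1, 1],
              [.1, .2, .3, .4, .5, .6, .65, .8, .9]))
g2 = list(zip([0, 1, 0, 0, 1, 1],
              [.11, .15, .81, .85, .89, .9]))
c1, c2 = exg_crossover(g1, g2, .11, .70)
# len(c1) == 5 and len(c2) == 10: the total length is conserved.
```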
Notice that, in the limit of stationary identifying
genes, the extended genome (exG) and its operators
are equivalent to any canonical fixed length genome.
We follow the development of exG with discussions on
crossover and exG convergence issues.
5.3 Crossover and Mutation for VLG’s
Other VLG approaches have introduced mutation-like
operators to change genome length. All have imple-
mented point operators that randomly insert or delete
a single gene. Obviously, larger length changes can be
accomplished through multiple application of such op-
erators. So, why are these mutation-like operators not
introduced in the context of exG’s?
The answer is that crossover, as implemented in exG’s,
is able to duplicate any mutation-like operation, obvi-
ating the need for them. We prove this by showing
how crossover can randomly insert or delete a single
gene (since any mutation-like length changing operator
can be decomposed into these simple actions). Imag-
ine that after a particular crossover, the first genome’s
length decreases by one. Then, the second genome’s
length must increase by one. In effect, a simultaneous
insertion and deletion has occurred, proving our claim.
5.4 Brief Discussion on exG Convergence
We now make a few remarks on the effects of exG
crossover on EA convergence. First, note that the sum
of genome lengths is always conserved, such that the
range of possible lengths after crossover is 0 to n + m. Then, under the assumption that identifying genes are uniformly distributed, the probability of choosing a crossover of length l for the first parent, while requiring l > 0, is

    p_l = (n − l + 1) / (Σ_{i=1}^{n} i)    (1)

which is simply the number of ways to get length l, divided by the total number of possible lengths. Here p_l is the probability that the crossover length is l, and n is the initial genome length. We can infer from Equation 1 that smaller length changes are more probable than larger changes.
Given the assumption of uniformly distributed identifying genes, the second genome crossover length is

    l_2 = int(l_1 m / n)    (2)

where l_i is the ith genome's crossover length, m is the second genome's length, and int indicates that l_2 must be an integer. These results indicate that crossover reduces the length of the longer parent while increasing the length of the shorter parent.
Equations 1 and 2 together imply that, under the as-
sumption of uniformly distributed identifying genes,
crossover primarily searches for genomes in the range
[n, m], where n < m. Therefore, it is expected that initial populations with wider ranges of lengths will be
able to converge more quickly to the optimal genome
length than initial populations with narrower ranges.
Is our assumption valid, though? Since populations
are typically generated randomly from uniform dis-
tributions, at least initially, the assumption is valid.
What about after the initial period? Well, SAGA
tells us that length changes suffer a doubled waiting
time such that length changes only occur in the initial
period, where in all likelihood, the uniform distribu-
tion has not changed considerably. Hence, the claim
of rapid convergence of initial populations with wide
length ranges holds.
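The claim that crossover pushes parent lengths toward each other can be checked with a quick Monte Carlo sketch. This is not from the paper: it simulates only the lengths, with identifying genes drawn uniformly on [0, 1] and a uniformly chosen identifier range to swap.

```python
import random

def crossover_lengths(n, m, trials=20000, seed=0):
    """Average child lengths after exG crossover between parents of
    lengths n and m, under uniformly distributed identifying genes."""
    rng = random.Random(seed)
    tot_n2 = tot_m2 = 0
    for _ in range(trials):
        ids1 = [rng.random() for _ in range(n)]
        ids2 = [rng.random() for _ in range(m)]
        a, b = sorted((rng.random(), rng.random()))  # swap range [a, b]
        k1 = sum(a <= x <= b for x in ids1)  # genes parent 1 gives away
        k2 = sum(a <= x <= b for x in ids2)  # genes parent 2 gives away
        tot_n2 += n - k1 + k2
        tot_m2 += m - k2 + k1
    return tot_n2 / trials, tot_m2 / trials

avg_n, avg_m = crossover_lengths(30, 10)
# On average the longer parent shrinks (avg_n < 30) and the shorter one
# grows (avg_m > 10), while the sum of lengths is conserved at 40.
```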
6 Preliminary Results

Two experiments with EA's utilizing exG's were conducted to determine exG feasibility. In each EA implementation, a (10 + 60) ES selection method was used. Termination of an EA occurred either when the number of generations exceeded five hundred or if the average population fitness did not vary more than 0.0001 from the previous generation. In this section, results and details of both experiments are presented.
6.1 Proof of Concept
The first experiment was a proof of concept experi-
ment. A simple problem was devised to see if exG’s
would actually converge to a target length. The initial
population was filled with genomes of varying length.
Fitness was calculated as the absolute difference be-
tween genome length and target length. 25 runs of the
EA were made for a variety of initial length ranges and
a target length of 36. Results of these runs are shown
in Table 1. The heading Generations lists the average
number of generations for convergence and σlists the
standard deviations. These results are promising, with
exG’s showing rapid convergence to the target length
over a variety of initial conditions.
Initial Range Generations σ
3–6 12.4 2.03
3–28 7.07 0.96
3–100 4.73 1.67
3–500 6.60 2.32
3–35 3.84 1.03
7–31 4.52 0.872
11–27 5.00 1.00
15–23 6.00 0.913
Table 1: Generations to convergence for different initial length ranges.
The last four test cases further show statistically signif-
icant differences in convergence speed. These results,
along with the fact that they all have different initial
ranges, but the same initial average, corroborate the
claim that populations with initially wider ranges have
faster convergence than populations with initially nar-
rower ranges.
6.2 2-D Shape Matching
The second experiment tackles a more useful problem,
that of 2-D shape matching. The problem is stated as,
given a target shape, find the target in the space of all
2-D shapes. Details of the EA implementation follow.
6.2.1 Coding Scheme
The search space requires two parameters, so a two
string polar coordinate coding is taken. One string
contains the angles and the other the radii of the ver-
tices. The identifying string consists of the proportion
of each edge length to polygon perimeter. A square whose (x, y) vertices are the permutations of (±1, ±1) then has the genome

angle :      45 135 225 315
radius :     17.68 17.68 17.68 17.68
identifier : .125 .375 .625 .875

Note that the identifiers are calculated from an initial angle of zero (hence, the .125) and that all radii are positive. Also, polygon perimeters are normalized to length 100, so each radius above is 25/√2 ≈ 17.68.
Canonical ES mutation operators are implemented
and the crossover described in regards to exG’s and
their identifying genes is used.
6.2.2 Initialization
The initial population members are initialized in the
following manner. First, the size, or number of vertices
n, of the shape is chosen uniformly from a preset range. The number of out-of-order vertices, n_o, is randomly chosen between 0 and n/3. Then, for each of the n − n_o in-order vertices, randomly generate a vertex with an angle between 0 and 360 degrees and a positive radius. The vertices are then sorted by angle. At this point, the polygon is either convex or star-shaped. For each of the remaining n_o out-of-order vertices, insert a new vertex after a randomly chosen vertex, v, whose new angle and radius are perturbations of v's. Normalization of the perimeter is then accomplished by multiplying each radius by 100 and dividing by the perimeter. Finally, the
identifying string is calculated. While self-intersecting
polygons can be generated from this procedure, their
fitness values are poor. Consequently, no measures are
taken to enforce non-self-intersecting polygons.
6.2.3 Fitness
2-D shape matching requires a quantification of the
similarity of two shapes. We thus employ a 'cluster-like' fitness function, described here. For each test shape vertex, the closest target shape vertex, l, is found. The distance between these vertices is added to the fitness value (FV). If the closest target vertices are unordered or repeated (say, for example, that the first three l's are the third, first, and first target vertices), then a penalty proportional to the number of unordered and repeated vertices is added to the FV. Therefore, smaller FV's denote higher similarity between shapes. Also, from our results, we know that FV's below 100 indicate qualitatively equivalent shapes.
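A sketch of such a cluster-like fitness function follows. This is illustrative only: the paper says the penalty is proportional to the number of unordered and repeated matches but does not state the constant, so the penalty weight here is an assumption.

```python
from math import dist

def cluster_fitness(test, target, penalty=50.0):
    """'Cluster-like' fitness: for each test vertex, add the distance to
    the closest target vertex; penalize matches that repeat a target
    vertex or come out of order.  Lower values mean higher similarity.
    The penalty weight of 50 is an assumption, not from the paper."""
    fv = 0.0
    matches = []
    for p in test:
        d, idx = min((dist(p, q), i) for i, q in enumerate(target))
        fv += d
        matches.append(idx)
    # A match is 'bad' if its target index fails to strictly increase.
    bad = sum(1 for a, b in zip(matches, matches[1:]) if b <= a)
    return fv + penalty * bad
```

Identical shapes matched vertex-for-vertex score 0; each repeated or out-of-order match adds one penalty unit on top of the summed distances.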
6.2.4 Results
2-D shape matching experiments were conducted with
a variety of target shapes, with all exhibiting similar
performance. Results of one run are shown in Fig-
ures 1–3. The EA is quite effective since convergence
of the test shape to the target shape occurs around the
60th iteration. Beyond this point, only small shape
changes occur, due to decreasing self-adaptive mutation rates.
Results averaged over 25 runs for four different ini-
tial size ranges are shown in Table 2. The heading
“Gens” indicates the number of iterations required for
length convergence and “Fitness” is the fitness value
of the final test shape. Several trends can be seen in
these results. First, convergence to the correct size
seems to be a valley-like function of initial range, with
smaller ranges containing the target size converging
more quickly. However, convergence to good fitness
values seems to be dependent on having a larger range
(or space) from which to search initially. As in the proof of concept experiment, though, it seems that enlarging the range too much results in lower performance.

Range   Gens.   σ      Fitness   σ
3–6 5.09 1.41 209.71 181.96
3–10 2.79 0.97 192.99 174.63
3–28 4.08 1.66 110.04 39.96
3–100 5.91 2.66 175.41 149.67
Table 2: Generations to length convergence and final fit-
ness values for different initial ranges.
Figure 1: Target Shape and Best Shape for Iteration 1
Figure 2: Best Shapes for Iterations 15 and 25
Figure 3: Best shapes at iterations 65 and 105.
7 exG Generality

exG's and their operators are generalizable to most other VLG's. We show this by outlining how exG's can be modified to mimic other VLG's; in particular, those found in messy GA's (mGA's) and Virtual Virus (VIV). To conserve space, only the VLG's are described in each case. The interested reader is referred to [1] and [4] for background on VIV and mGA's, respectively.
7.1 Messy GA’s
The messy coding used in mGA’s consists of strings of
concatenated pairs of numbers. The first number is an
integer ‘name’, while the second is a parameter value.
The coding ((1 0)(3 1)), thus, denotes a genome with
zero in the first bit and one in the third. The second bit
is unspecified and is determined via a template. Bits
can also be overspecified, as in ((3 1)(3 0)). In this
case, the first occurrence is dominant; so, the third bit
is zero.
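Decoding under the messy scheme can be sketched as follows. This is an illustration of the first-occurrence and template rules just described, not code from [4].

```python
def decode_messy(genes, template):
    """Decode a messy-GA genome: genes are (name, value) pairs, where a
    name is a 1-indexed bit position.  The first occurrence of a name is
    dominant; unspecified bits are filled in from the template."""
    bits = list(template)
    seen = set()
    for name, value in genes:
        if name not in seen:  # first occurrence wins
            bits[name - 1] = value
            seen.add(name)
    return bits

# ((1 0)(3 1)) over a 3-bit all-ones template: bit 2 comes from the template.
assert decode_messy([(1, 0), (3, 1)], [1, 1, 1]) == [0, 1, 1]
# ((3 1)(3 0)) is overspecified; the first occurrence is dominant.
assert decode_messy([(3, 1), (3, 0)], [0, 0, 0]) == [0, 0, 1]
```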
A new crossover operator is introduced that is a com-
bination of two simpler operators called splice and
cut. The splice operator simply concatenates two
genomes into one, whereas a cut operator splits a single
genome into two. Crossover is achieved by cutting two
genomes, then splicing across the genomes. Mutation
remains unchanged with bits randomly switching.
Messy coding can be achieved by exG’s if ranges of
identifying gene values are associated with ‘names’.
For example, say that an identifying gene in the
range [1,2) corresponds to the first bit, [2,3) to the
second, etc. We show how under and overspecifi-
cation are handled in this exG with the examples
used above. ((1 0)(3 1)) maps to something like
((1.1 0)(3.2 1)). Similarly, ((3 1)(3 0)) becomes some-
thing like ((3.1 1)(3.2 0)). Note that for exG’s, we
must maintain order based on identifying genes. This
fact precludes the use of the standard exG two point
crossover, since a cut such as ((4 0)(2 1)(5 1)(3 1)(1 1))
to ((4 0)(2 1)) and ((5 1)(3 1)(1 1)) is not possible.
The solution is to implement uniform crossover, where
every gene has its own crossover probability.
7.2 Virtual Virus
Instead of binary or continuous valued strings, VIV
uses a four-valued alphabet in its genomes. In this case, the letters used are a, c, t, and g. Genomes are allowed to have different lengths, leading to a new crossover operator. Crossover occurs between homologous regions; in other words, the blocks with the highest similarity are swapped during crossover. Crossover lengths are the same for both parents, but positions can vary. For example, a length 6 block starting from position 4 in Parent_1 might be swapped with the most similar length 6 block in Parent_2, starting at position 2. Mutation, again, remains unchanged, with genes randomly switching values.
VIV genomes can be mimicked by exG's in the following manner. First, assign each letter a numerical value, say 1 for a, 2 for c, 3 for t, and 4 for g. Then, the identifying string is constructed by incrementing the previous gene value by the current letter's value. For example, act would be (1 3 6). Mutation would then simply require updating the numerical values (e.g., mutation of act to aat results in (1 2 5)). Crossover implements a sliding/translating scheme: the Parent_1 block is translated to the beginning of each valid Parent_2 block to determine the closest match. For example, say the Parent_1 block is (4 8 12), or ggg, and a length 3 block in Parent_2 consists of (10 14 19), or agg. Then, the first block is translated so it starts from 10, leading to (10 14 18). If this is the most similar block, translation of the complete Parent_1 genome by 6 allows the standard 2-point exG crossover to be used in a fashion equivalent to VIV crossover in its coding scheme.
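The identifier construction and its mutation update can be sketched in a few lines; this is an illustrative implementation of the running-sum mapping just described, not code from [1].

```python
from itertools import accumulate

VALUE = {'a': 1, 'c': 2, 't': 3, 'g': 4}

def viv_identifiers(seq):
    """Build the exG identifying string for a VIV genome: each identifier
    is the running sum of the letters' numerical values."""
    return list(accumulate(VALUE[ch] for ch in seq))

assert viv_identifiers('act') == [1, 3, 6]
# Mutating act to aat only requires recomputing the running sums:
assert viv_identifiers('aat') == [1, 2, 5]
```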
7.3 Notes on exG Generality
The previous discussions show that exG’s are able to
replicate other VLG’s and their operators with slight
modifications. This is not surprising, as most VLG’s
have similar structures. In any event, it seems that
imitating VLG’s with other VLG’s requires modifica-
tions to the coding scheme or operators, which may or
may not be a trivial task.
8 Summary

A general variable length genome has been developed
to address the problems of fixed dimensionality repre-
sentations and application specific VLG’s. The newly
developed VLG is shown to be an extension of canon-
ical genomes; hence, the name extended genome, or
exG. Experimental results of EA’s using exG’s are
promising, as rapid convergence to optimal lengths is
exhibited. The results also are in line with the theoret-
ical discussions on SAGA and exG convergence. Fur-
thermore, the utility of exG’s in real EA applications is
shown in the 2-D shape matching experiments. Some
thoughts on how to modify exG’s into other VLG’s
are also presented in an attempt to show the general
applicability of exG’s.
References

[1] Burke, D., DeJong, K., Grefenstette, J., Ramsey, C., and Wu, A. Putting more genetics into genetic algorithms. Evolutionary Computation 6, 4 (1998), 387–410.
[2] Dasgupta, D., and Michalewicz, Z., Eds. Evolutionary Algorithms in Engineering Applications. Springer, Berlin, Germany, 1997.
[3] Gen, M., and Cheng, R. Genetic Algorithms and Engineering Design. John Wiley, New York.
[4] Goldberg, D., Deb, K., and Korb, B. Messy genetic algorithms: Motivation, analysis, and first results. Complex Systems 3 (1989), 493–530.
[5] Goldberg, D. E. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Company, Inc., New York.
[6] Harvey, I. The SAGA cross: The mechanics of
crossover for variable-length genetic algorithms.
In Parallel Problem Solving from Nature 2 (Cam-
bridge, MA, 1992), Elsevier, pp. 269–278.
[7] Harvey, I. Species Adaptation Genetic Algorithms: A basis for a continuing SAGA. In Proceedings of the First European Conference on Artificial Life (Cambridge, MA, 1992), MIT Press, pp. 346–354.
[8] Holland, J. H. Adaptation in natural and arti-
ficial systems. The University of Michigan Press,
Ann Arbor, Michigan, 1975.
[9] Kauffman, S., and Levin, S. Towards a general theory of adaptive walks on rugged landscapes. Journal of Theoretical Biology 128 (1987).
[10] Schwefel, H.-P. Evolution and Optimum Seek-
ing. John Wiley, New York, 1995.
[11] Winter, G., Periaux, J., Galan, M., and
Cuesta, P.,Eds. Genetic Algorithms in Engi-
neering and Computer Science. John Wiley, New
York, 1995.
... GEVO uses the patch-based representation for crossover, because combining two random program slices would require more extensive repair. GEVO implements one-point messy crossover, which combines shuffle [17] and variable-length [55] crossover operations. Figure 5 illustrates the process. ...
GPUs are a key enabler of the revolution in machine learning and high performance computing, functioning as de facto co-processors to accelerate large-scale computation. As the programming stack and tool support have matured, GPUs have also become accessible to programmers, who may lack detailed knowledge of the underlying architecture and fail to fully leverage the GPU's computation power. GEVO (Gpu optimization using EVOlutionary computation) is a tool for automatically discovering optimization opportunities and tuning the performance of GPU kernels in the LLVM representation. GEVO uses population-based search to find edits to GPU code compiled to LLVM-IR and improves performance on desired criteria while retaining required functionality. We demonstrate that GEVO improves the execution time of the GPU programs in the Rodinia benchmark suite and the machine learning models, SVM and ResNet18, on NVIDIA Tesla P100. For the Rodinia benchmarks, GEVO improves GPU kernel runtime performance by an average of 49.48% and by as much as 412% over the fully compiler-optimized baseline. If kernel output accuracy is relaxed to tolerate up to 1% error, GEVO can find kernel variants that outperform the baseline version by an average of 51.08%. For the machine learning workloads, GEVO achieves kernel performance improvement for SVM on the MNIST handwriting recognition (3.24X) and the a9a income prediction (2.93X) datasets with no loss of model accuracy. GEVO achieves 1.79X kernel performance improvement on image classification using ResNet18/CIFAR-10, with less than 1% model accuracy reduction.
... The variable length genome, addressing the problems of FLCs in canonical EAs, was proposed in [5]. The authors compared their plan with other variable length representations. ...
Full-text available
In the last decade, Evolutionary Algorithms (EAs) have been widely used to solve optimization problems in the real world. EAs are population-based algorithms, starting the search with initial set of candidates or chromosomes, for the optimal solution of a given optimization problem. Traditional EAs use a population with Fixed Length Chromosomes (FLCs). In FLCs, all the chromosomes will have same length, whereas, in Variable Length Chromosomes (VLCs), a population can have chromosomes of different lengths. This paper proposes to use VLCs in the context of Multi-Objective Differential Evolution (MODE) algorithm. The MODE with VLCs is to solve RFID reader placement problem for the buildings with multiple rooms of different sizes. The type of coverage of RFID readers considered is elliptical. Based on the dimensions of each room, the number of RFID readers required is varied, which warrants the deployment of VLCs. This paper also presents the consequence of VLCs, in solving the RFID reader placement problem using different weight vectors.
... Beyond the Gibbs updates, some of the more complex operators like swapping node conductivities (a reflection in hyperspace) find equivalents in mutation operators of genetic algorithms (interchanging, swap, or Twors mutation; see, e.g., Abdoun et al., 2012;Sivanandam & Deepa, 2007). Others like removing lenses or adding nodes (a removal or addition of hyperspace dimensions) are only rarely encountered in evolutionary algorithms, since most problem statements assume a fixed dimensionality of the parameter space (Lee & Antonsson, 2000). Formally incorporating hyperparameterization into the framework derived in section 2.3 could be achieved in two ways: (i) To entirely replace the parameters with hyperparameters, and interpret the field generator as a deterministic part of the numerical model (a hyperspace-based perspective), or (ii) to incorporate them as a latent logic underlying the artificial random dynamics in parameter space (a parameter space-based perspective). ...
Full-text available
Over the past decades, advances in data collection and machine learning have paved the way for the development of autonomous simulation frameworks. Among these, many are capable not only of assimilating real‐time data to correct their predictive shortcomings but also of improving their future performance through self‐optimization. In hydrogeology, such techniques harbor great potential for informing sustainable management practices. Simulating the intricacies of groundwater flow requires an adequate representation of unknown, often highly heterogeneous geology. Unfortunately, it is difficult to reconcile the structural complexity demanded by realistic geology with the simplifying assumptions introduced in many calibration methods. The particle filter framework would provide the necessary versatility to retain such complex information but suffers from the curse of dimensionality, a fundamental limitation discouraging its use in systems with many unknowns. Due to the prevalence of such systems in hydrogeology, the particle filter has received little attention in groundwater modeling so far. In this study, we explore the combined use of dimension‐reducing techniques and artificial parameter dynamics to enable a particle filter framework for a groundwater model. Exploiting freedom in the design of the dimension‐reduction approach, we ensure consistency with a predefined geological pattern. The performance of the resulting optimizer is demonstrated in a synthetic test case for three such geological configurations and compared to two Ensemble Kalman Filter setups. Favorable results even for deliberately misspecified settings make us hopeful that nested particle filters may constitute a useful tool for geologically consistent real‐time parameter optimization.
... He proposes using a recombination operator able, given a crossover point in the primary genotype, to choose a complementary crossover point in the other parent. For a different domain, in [22], the authors review existing solutions that use variable-length genomes and the effects of this encoding on algorithm convergence. Another example of this type of genome applied to event planning can be found in [23], where the solution of a problem can be obtained by applying a variable number of steps. ...
The problems related to traffic coordination in intersections are quite common in large cities. Current solutions are based on the use of static priorities (i.e. yield signs), on variable signaling like traffic lights, or even on the physical modification of the road structure by transforming intersections into roundabouts. The emergence, evolution, and consolidation of technologies that enable the paradigm of connected and autonomous vehicles have allowed the development of new solutions where the vehicles' coordination follows a preset path without stopping when entering the intersection. In this work, we propose using a genetic algorithm with variable-length chromosomes to solve the vehicle coordination multipath problem in intersections. The proposed algorithm is focused on optimizing the vehicles' arrival sequencing according to preset flow rates. While other solutions assume the same flow rates in every branch of the intersection, in our proposal the traffic flows can be asymmetric. We extend one of the existing intersection models, based on fixed paths, to allow multiple paths. This means that each vehicle can go from any input point to any output branch in the intersection. Moreover, we have designed specific selection, crossover, and mutation operators, and a new methodology to carry out the crossover function between different-sized individuals, adapted to the specific peculiarities of the problem. Our proposal has been validated by carrying out tests using input data with known solutions and with random data. Compared with systems based on other optimizers, our approach improves the fitness outcome by up to 9.1% and the computation time by up to 126%.
... The idea behind this approach is to have a population of solutions undergo a selection based on a fitness function, to produce the next generation while performing mutations and combinations of the chromosomes of the individuals. This procedure is repeated until convergence, which in many instances will be at a global optimum [14]. ...
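The generate-select-iterate loop this excerpt describes can be sketched in a few lines. A minimal generational EA, assuming (for illustration only) a fixed-length bit-string encoding, binary tournament selection, one-point crossover, and bit-flip mutation:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100, p_mut=0.05, rng=random):
    """Minimal generational EA: tournament selection, one-point
    crossover, and bit-flip mutation, iterated for a fixed budget."""
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament: the fitter of two random parents
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < p_mut else g  # bit-flip mutation
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: fitness is the number of 1-bits
```

Note the fixed `length` parameter: every individual has the same dimensionality, which is precisely the restriction the variable-length genome of the surveyed paper is designed to lift.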
The most widely used positioning system today is the GPS (Global Positioning System), which has many commercial, civil and military applications, being present in most smartphones. However, this system does not perform well in indoor locations, which poses a constraint for the positioning task on environments like shopping malls, office buildings, and other public places. In this context, WLAN positioning systems based on fingerprinting have attracted a lot of attention as a promising approach for indoor localization while using the existing infrastructure. This paper contributes to this field by presenting a methodology for developing WLAN positioning systems using genetic algorithms and neural networks. The fitness function of the genetic algorithm is based on the generalization capabilities of the network for test points that are not included in the training set. By using this approach, we have achieved state-of-the-art results with few parameters, and our method has shown to be less prone to overfitting than other techniques in the literature, showing better generalization in points that are not recorded on the radio map.
This article presents a method for multiobjective optimization of a complex system, modelling it as a collection of components and resource flows between them. Constraints can be imposed on a component basis or system-wide, based on the resource flows. Optimization is performed by a genetic algorithm utilizing a variable-length genome. This specialized genome enables a more open-ended design capability than previous similar frameworks. Systems are evaluated through a user-defined simulation, and results can be presented in any trade space of interest based on the performance in the simulation. The framework is then applied to the design of a table as a simple proof of concept. In this problem, the framework was found to identify a design within 4% of the theoretical optimum 80% of the time, and within 8% of the theoretical optimum the remaining 20% of the time.
The focus of this paper is variable length optimisation, a type of optimisation where the number of variables in the optimal solution is not known a priori. Due to the difference in solution space, traditional algorithms for fixed length problems either require significant adjustment or cannot be applied at all. Furthermore, there is evidence that variable length algorithms - algorithms that consider solutions with different lengths throughout the optimisation process - may outperform fixed length algorithms on these problems. To investigate this, we have designed an abstract variable length problem that allows for straightforward and clear analysis. The performance of a number of evolutionary algorithms on this problem is analysed, including a fixed length algorithm and a state-of-the-art variable length algorithm. We propose a new mutation operator for variable length algorithms and suggest potential directions for further research. Overall, the variable length algorithm with our mutation operator outperformed both the state-of-the-art variable length algorithm and the fixed length algorithm.
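A mutation operator for this setting must be able to change the genome's length as well as its contents. A minimal sketch of such an operator (the insertion/deletion probabilities and the `new_gene` factory are illustrative assumptions, not the operator proposed in the abstract above):

```python
import random

def variable_length_mutation(genome, new_gene, p_insert=0.1, p_delete=0.1, rng=random):
    """Mutate a variable-length genome: possibly insert a freshly
    generated gene at a random position, possibly delete a random
    gene, always leaving at least one gene in place."""
    child = list(genome)
    if rng.random() < p_insert:
        child.insert(rng.randrange(len(child) + 1), new_gene())
    if len(child) > 1 and rng.random() < p_delete:
        del child[rng.randrange(len(child))]
    return child
```

Balancing `p_insert` against `p_delete` biases the search toward growing or shrinking solutions, which is one of the design choices a variable-length algorithm must make explicitly.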
GPUs are a key enabler of the revolution in machine learning and high-performance computing, functioning as de facto co-processors to accelerate large-scale computation. As the programming stack and tool support have matured, GPUs have also become accessible to programmers, who may lack detailed knowledge of the underlying architecture and fail to fully leverage the GPU’s computation power. GEVO (Gpu optimization using EVOlutionary computation) is a tool for automatically discovering optimization opportunities and tuning the performance of GPU kernels in the LLVM representation. GEVO uses population-based search to find edits to GPU code compiled to LLVM-IR and improves performance on desired criteria while retaining required functionality. We demonstrate that GEVO improves the execution time of general-purpose GPU programs and machine learning (ML) models on NVIDIA Tesla P100. For the Rodinia benchmarks, GEVO improves GPU kernel runtime performance by an average of 49.48% and by as much as 412% over the fully compiler-optimized baseline. If kernel output accuracy is relaxed to tolerate up to 1% error, GEVO can find kernel variants that outperform the baseline by an average of 51.08%. For the ML workloads, GEVO achieves kernel performance improvement for SVM on the MNIST handwriting recognition (3.24×) and the a9a income prediction (2.93×) datasets with no loss of model accuracy. GEVO achieves 1.79× kernel performance improvement on image classification using ResNet18/CIFAR-10, with less than 1% model accuracy reduction.
We have recently developed OMNIREP, a coevolutionary algorithm to discover both a representation and an interpreter that solve a particular problem of interest. Herein, we demonstrate that the OMNIREP framework can be successfully applied within the field of evolutionary art. Specifically, we coevolve representations that encode image position, alongside interpreters that transform these positions into one of three pre-defined shapes (chunks, polygons, or circles) of varying size, shape, and color. We showcase a sampling of the unique image variations produced by this approach.
Contents:
I. Introduction: Evolutionary Algorithms - An Overview; Robust Encodings in Genetic Algorithms.
II. Architecture and Civil Engineering: Genetic Engineering and Design Problems; The Generation of Form Using an Evolutionary Approach; Evolutionary Optimization of Composite Structures; Flaw Detection and Configuration with Genetic Algorithms; A Genetic Algorithm Approach for River Management; Hazards in Genetic Design Methodologies.
III. Computer Science and Engineering: The Identification and Characterization of Workload Classes; Lossless and Lossy Data Compression; Database Design with Genetic Algorithms; Designing Multiprocessor Scheduling Algorithms Using a Distributed Genetic Algorithm System; Prototype Based Supervised Concept Learning Using Genetic Algorithms; Prototyping Intelligent Vehicle Modules Using Evolutionary Algorithms; Gate-Level Evolvable Hardware: Empirical Study and Application; Physical Design of VLSI Circuits and the Application of Genetic Algorithms; Statistical Generalization of Performance-Related Heuristics for Knowledge-Lean Applications.
IV. Electrical, Control and Signal Processing: Optimal Scheduling of Thermal Power Generation Using Evolutionary Algorithms; Genetic Algorithms and Genetic Programming for Control; Global Structure Evolution and Local Parameter Learning for Control System Model Reductions; Adaptive Recursive Filtering Using Evolutionary Algorithms; Numerical Techniques for Efficient Sonar Bearing and Range Searching in the Near Field Using Genetic Algorithms; Signal Design for Radar Imaging in Radar Astronomy: Genetic Optimization; Evolutionary Algorithms in Target Acquisition and Sensor Fusion.
V. Mechanical and Industrial Engineering: Strategies for the Integration of Evolutionary/Adaptive Search with the Engineering Design Process; Identification of Mechanical Inclusions; GeneAS: A Robust Optimal Design Technique for Mechanical Component Design; Genetic Algorithms for Optimal Cutting; Practical Issues and Recent Advances in Job- and Open-Shop Scheduling; The Key Steps to Achieve Mass Customization.
Contents: Introduction; Cleveland and Smith's Approach; Gupta, Gupta, and Kumar's Approach; Lee and Kim's Approach; Cheng and Gen's Approach.
This paper defines and explores a somewhat different type of genetic algorithm (GA) - a messy genetic algorithm (mGA). Messy GAs process variable-length strings that may be either under- or over-specified with respect to the problem being solved. As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 30-bit, order-three-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worst-case ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem. The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers.
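The under- and over-specification described above can be made concrete with a small decoder. Genes are (locus, allele) pairs; following the messy-GA literature, the first occurrence of a locus wins (left-to-right precedence) and loci the chromosome never names are filled in from a competitive template. The specific data layout below is an illustrative assumption:

```python
def decode_messy(genes, template):
    """Decode a messy-GA chromosome into a full-length string.

    `genes` is a list of (locus, allele) pairs that may name a locus
    several times (over-specification: the first occurrence wins) or
    not at all (under-specification: the template fills the gap).
    """
    out = list(template)
    seen = set()
    for locus, allele in genes:
        if locus not in seen:            # left-to-right precedence
            out[locus] = allele
            seen.add(locus)
    return out

# Locus 0 is over-specified (first pair wins), locus 2 is unspecified.
print(decode_messy([(0, 1), (0, 0), (1, 1)], template=[0, 0, 0]))  # [1, 1, 0]
```

Because decoding always yields a full-length string, ordinary fitness evaluation still applies even though the chromosomes themselves vary in length.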
David Goldberg's Genetic Algorithms in Search, Optimization and Machine Learning is by far the bestselling introduction to genetic algorithms. Goldberg is one of the preeminent researchers in the field--he has published over 100 research articles on genetic algorithms and is a student of John Holland, the father of genetic algorithms--and his deep understanding of the material shines through. The book contains a complete listing of a simple genetic algorithm in Pascal, which C programmers can easily understand. The book covers all of the important topics in the field, including crossover, mutation, classifier systems, and fitness scaling, giving a novice with a computer science background enough information to implement a genetic algorithm and describe genetic algorithms to a friend.
Adaptive evolution, to a large extent, is a complex combinatorial optimization process. In this article we take beginning steps towards developing a general theory of adaptive "walks" via fitter variants in such optimization processes. We introduce the basic idea of a space of entities, each a 1-mutant neighbor of many other entities in the space, and the idea of a fitness ascribed to each entity. Adaptive walks proceed from an initial entity, via fitter neighbors, to locally or globally optimal entities that are fitter than their neighbors. We develop a general theory for the number of local optima, lengths of adaptive walks, and the number of alternative local optima accessible from any given initial entity, for the baseline case of an uncorrelated fitness landscape. Most fitness landscapes are correlated, however. Therefore we develop parts of a universal theory of adaptation on correlated landscapes by adaptive processes that have sufficient numbers of mutations per individual to "jump beyond" the correlation lengths in the underlying landscape. In addition, we explore the statistical character of adaptive walks in two independent complex combinatorial optimization problems, that of evolving a specific cell type in model genetic networks, and that of finding good solutions to the traveling salesman problem. Surprisingly, both show similar statistical features, encouraging the hope that a general theory for adaptive walks on correlated and uncorrelated landscapes can be found. In the final section we explore two limits to the efficacy of selection. The first is new, and surprising: for a wide class of systems, as the complexity of the entities under selection increases, the local optima that are attainable fall progressively closer to the mean properties of the underlying space of entities. 
This may imply that complex biological systems, such as genetic regulatory systems, are "close" to the mean properties of the ensemble of genomic regulatory systems explored by evolution. The second limit shows that with increasing complexity and a fixed mutation rate, selection often becomes unable to pull an adapting population to those local optima to which connected adaptive walks via fitter variants exist. These beginning steps in theory development are applied to maturation of the immune response, and to the problem of radiation and stasis. Despite the limitations of the adaptive landscape metaphor, we believe that further development along the lines begun here will prove useful.
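The adaptive-walk process the two paragraphs above analyse can be sketched directly: repeatedly move from the current genotype to a randomly chosen fitter 1-mutant neighbor until none exists, i.e. until a local optimum is reached. The hash-based fitness below is an illustrative stand-in for an uncorrelated landscape, where each genotype's fitness is effectively independent of its neighbors':

```python
import random

def adaptive_walk(start, fitness, rng=random):
    """Walk uphill via fitter 1-mutant neighbors until none remain.

    Returns the local optimum reached and the number of steps taken.
    """
    current, steps = list(start), 0
    while True:
        fitter = []
        for i in range(len(current)):
            n = list(current)
            n[i] ^= 1                        # flip one bit: a 1-mutant neighbor
            if fitness(n) > fitness(current):
                fitter.append(n)
        if not fitter:                       # local optimum: no fitter neighbor
            return current, steps
        current = rng.choice(fitter)         # random fitter variant
        steps += 1

# Uncorrelated stand-in: each genotype gets an independent-looking fitness.
fit = lambda g: hash(tuple(g)) % 10**6
optimum, walk_length = adaptive_walk([0] * 10, fit)
```

Repeating the walk from many random starts gives empirical estimates of the quantities the theory addresses: the number of distinct local optima and the distribution of walk lengths.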