Threshold Selecting: Best Possible Probability Distribution
for Crossover Selection in Genetic Algorithms
Jörg Lässig
Chemnitz University of Techn.
Reichenhainer Str. 70, R. A209
D-09126 Chemnitz, Germany
joerg.laessig@cs.tu-chemnitz.de

Karl Heinz Hoffmann
Chemnitz University of Techn.
Reichenhainer Str. 70, R. 356
D-09126 Chemnitz, Germany
hoffmann@physik.tu-chemnitz.de

Mihaela Enăchescu
Stanford University
Terman Eng. Center, R. 323
Stanford, CA 94305-4036, United States
mihaela@cs.stanford.edu
ABSTRACT
The paper considers the problem of selecting individuals in
the current population in Genetic Algorithms for crossover
to find a solution of high fitness of a given combinatorial
optimization problem.
Many different schemes have been considered in the literature as possible crossover selection strategies, such as Windowing, Exponential reduction, Linear transformation or normalization, and Binary tournament selection.
It is shown that if one wishes to maximize any linear func-
tion of the final state probabilities, e.g. the fitness of the best
individual of the final population of the algorithm, then the
best probability distribution for selecting individuals in each
generation is a rectangular distribution over the individuals
sorted by their fitness values.
This means that uniform probabilities are assigned to a group of the best individuals of the population, while individuals whose fitness ranks lie beyond a fixed cutoff rank in the sorted fitness vector are selected with probability zero. The considered strategy is called Threshold Selecting.
The proof applies basic arguments of Markov chains and
linear optimization and makes only a few assumptions on
the underlying principles and hence applies to a large class
of Genetic Algorithms.
Track: Genetic Algorithms.
Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control
Methods, and Search—heuristic methods
General Terms
Algorithms, Theory
Keywords
Genetic Algorithms, Crossover Selection, Markov Process,
Master Equation, Threshold Selecting
Copyright is held by the author/owner(s).
GECCO’08, July 12–16, 2008, Atlanta, Georgia, USA.
ACM 978-1-60558-131-6/08/07.
1. INTRODUCTION
When designing a Genetic Algorithm (GA) for a given problem, there are many degrees of freedom to be fixed, but often the choice of parameters or operators relies on experimental tests and the experience of the programmer. Such choices include:
• representation of a solution in the state space as an
artificial genome,
• choice of a crossover operator to form a new population
in each iteration,
• choice of a mutation rate,
• choice of a selection scheme over the individuals of a
population for crossover.
Today GAs are in broad practical application to problems in many different fields such as science, engineering, and economics (see e.g. [2, 7, 12, 14, 16, 18]), and excellent experimental results have been obtained. Despite interesting theoretical progress in recent years [3, 5, 6, 4, 19, 22], exact proofs for optimal choices of design criteria are still missing.
This paper focuses on the last of the design criteria above,
also called parent selection. In all variants of GAs some form
of the selection operator must be present [3]. A wide variety
of selection strategies have been proposed in the literature.
In general, m individuals of the current population of size
n have to be selected for crossover into a mating pool. In-
dividuals with higher fitness are more likely to receive more
than one copy and less fit individuals are more likely to re-
ceive no copies. In different replacement schemes the size
of the pool differs. After selecting the mating pool some
crossover scheme takes individuals from that pool and pro-
duces new outcome, until the pool is exhausted. Regarding
the crossover scheme no further restrictions are necessary
for our considerations concerning the optimal choice of a
selection strategy as discussed in the sections below.
The behavior of the GA very much depends on how individuals are chosen to go into the mating pool [20]. The simplest approach is that the reproduction probability of an individual of the population is directly proportional to its fitness (roulette-wheel selection). Other approaches are windowing, where first the fitness of the worst individual is subtracted from each individual fitness; exponential, where the square root of one plus the fitness is taken; linear transformation, where a linear function of the fitness is computed, e.g. f′ = ϱ · f + ϕ; linear ranking selection, where a linear function over a fitness ranking of the individuals is applied;
and binary tournament selection, where two individuals are selected with uniform probability in a preselection and the individual with the better fitness is then submitted to the mating pool. See e.g. [3, 6, 20] for an overview of different selection schemes.
Each of these choices is reported to have strengths and
weaknesses. The selection strategy has to be chosen such
that the population evolves towards "better" overall fitness.
For example, the fitness of the fittest individual in the final
population might be required to be as high as possible.
In the following it is proven that Threshold Selecting is op-
timal in a certain sense defined below. In Threshold Select-
ing the selection is based on fitness ranks, and the selection
probability on the ranks is rectangular, i.e. it includes one
or more individual(s) with the highest fitness value(s) with
the same non-vanishing probability but introduces a cutoff
rank γ so that all individuals with higher ranks are selected
with probability zero.
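As a minimal sketch (the helper names are illustrative, not part of the formal treatment; the population is assumed to be already sorted by decreasing fitness), Threshold Selecting can be written as:

```python
import random

def selection_probabilities(n, gamma):
    """Rectangular distribution d(k) over ranks k = 1..n: uniform mass
    1/gamma on the gamma best ranks, zero beyond the cutoff rank."""
    return [1.0 / gamma if k <= gamma else 0.0 for k in range(1, n + 1)]

def threshold_select(sorted_population, gamma, rng=random):
    """Draw one parent by Threshold Selecting from a population sorted
    by decreasing fitness: uniform choice among the gamma best."""
    return rng.choice(sorted_population[:gamma])
```

For n = 4 and γ = 3 this yields the distribution (1/3, 1/3, 1/3, 0), i.e. the last row of Table 1 below.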
Table 1 [20] gives an example of the different methods for a population of four individuals with exemplary fitness values 50, 25, 15, and 10.
Table 1: Comparison of different selection strategies

Rank of the individuals           1      2      3      4
Rough fitness                     50.0   25.0   15.0   10.0
Roulette-wheel                    0.5    0.25   0.15   0.1
Windowing                         0.667  0.25   0.083  0.0
Exponential                       0.365  0.261  0.205  0.169
Linear transformation (2f + 1)    0.495  0.25   0.152  0.103
Linear ranking selection          0.4    0.3    0.2    0.1
Binary tournament selection       0.438  0.312  0.188  0.062
Threshold Selecting (γ = 3)       0.333  0.333  0.333  0.0
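The rows of Table 1 can be recomputed directly from the definitions given in the introduction; the following sketch (helper names are our own) normalizes each weighting over the four fitness values:

```python
import math

fitness = [50.0, 25.0, 15.0, 10.0]   # individuals already sorted by rank
n = len(fitness)

def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

roulette    = normalize(fitness)                          # proportional to fitness
windowing   = normalize([f - min(fitness) for f in fitness])
exponential = normalize([math.sqrt(1 + f) for f in fitness])
linear_tf   = normalize([2 * f + 1 for f in fitness])     # f' = 2f + 1
ranking     = normalize([n - k for k in range(n)])        # linear in the rank
gamma = 3
threshold   = [1 / gamma] * gamma + [0.0] * (n - gamma)
```

Rounding to three digits recovers the table rows, e.g. linear_tf gives 0.495, 0.25, 0.152, 0.103.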
2. TECHNIQUE
The proof technique is based on the fact that the selection probability distributions assigning probabilities to n ordered objects can be seen as vectors in an n-dimensional space. Special assumptions on the problem structure restrict the space of possible solutions to a simplex. Then, due to the linearity of applicable objective functions in the selection probabilities, the problem reduces to the task of finding the minimum of a linear function on a simplex, which in the general case is attained at a vertex by the fundamental theorem of linear programming. As will be shown, the vertices are exactly equivalent to the rectangular distributions mentioned above.
This technique has already been applied to the acceptance rule in Monte Carlo methods such as Simulated Annealing [17], Threshold Accepting [8], or Tsallis statistics [9, 10, 23], showing that Threshold Accepting is provably a best possible choice [11].
Further, the stochastic optimization algorithm Extremal Optimization [1] has been investigated [13, 15]. Like the methods mentioned before, Extremal Optimization works by simulating random walkers, but it needs a special structure
of the problem under consideration: every state is specified
by several degrees of freedom, each of which can be assigned
a fitness. Each iteration chooses one degree of freedom for
change based on fitness values. It has been shown that a
rectangular distribution is the best choice in each iteration
of Extremal Optimization.
3. DEFINITIONS
We consider combinatorial optimization problems with a
finite state space Ω of states α ∈ Ω, which are the possible
solutions for the problem. A fitness function f(α) describes
the quality of the solution α and has to be maximized, i.e.
the states with a higher fitness are better. Note that there
is only a finite number of possible values for f(α).
GAs consider populations (or pools) of states. If there are n states in a population, then each generation of the GA is equivalent to a generalized state α := (α1, α2, ..., αn) ∈ Ω^n with n finite. A generalized fitness function f(α) has to be defined as well, which is usually done by f(α) := max{f(αi) | i = 1, 2, ..., n}.
To obtain good solutions GAs proceed by randomly select-
ing a start population, and then evolving it by a selection
and subsequent crossover operation. Mutations are also pos-
sible, but are of no importance here. We here confine our-
selves to selection steps, where the probability to enter the
mating pool is based on the fitness ranks of the population
members. The possible mating pools are again described by
generalized states ¯ γ := (γ1,γ2,...,γm), albeit not of size
n but of size m. The bar notation is used to differentiate
between the population and the mating pool.
For the choice of the m individuals for the crossover step in the GA, m time-dependent probability distributions di,t(k), i = 1, 2, ..., m, are defined over the ranks k. Given this structure, at time t exactly m ranks k_l1, k_l2, ..., k_lm are chosen by the GA, and hence m individuals from the current population according to di,t, i = 1, 2, ..., m. Technically, each of the individual members βi of the current population β is assigned a rank ki based on its fitness: the individuals of a population can be ordered according to their fitness in a ranking ki ∈ N⋆_n = {1, 2, ..., n}:

    ki ≤ kj ⇐⇒ f(αi) ≥ f(αj)  ∀ pairs (i, j).
The following assumptions are adopted for the selection
probabilities di,t(k):
(A1) Each step of the algorithm is independent of the former
steps.
(A2) In each step t, 1 ≥ di,t(1) ≥ di,t(2) ≥ ··· ≥ di,t(n) ≥ 0
holds for i = 1,2,...,m, i.e. it is more probable to
recombine individuals with lower rank (higher fitness)
than individuals with a higher rank (lower fitness).
(A3) Σ_{k=1}^{n} di,t(k) = 1 for i = 1, 2, ..., m, i.e. the distributions are normalized.
Due to the random nature of the selection process there is a transition probability

    Λ^S_{γ̄β} = d1,t(k_l1) · d2,t(k_l2) ··· dm,t(k_lm)    (1)

to obtain the mating pool γ̄ = (β_l1, β_l2, ..., β_lm) from the population β = (β1, β2, ..., βn).
In the crossover step an operator C_γ̄ is applied to the current population β. The operator C_γ̄ is not deterministic but determines the fixed probabilities Λ^C_{αγ̄β} to obtain a new population α ∈ Ω from β ∈ Ω and γ̄ as intermediate step. For each fixed pair γ̄ and β we have

    Σ_{α∈Ω} Λ^C_{αγ̄β} = 1.
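Equation (1) and the normalization (A3) can be checked mechanically; the sketch below (helper names are our own, for small n and m) multiplies the per-slot rank probabilities and verifies that the pool probabilities sum to one over all rank tuples:

```python
from itertools import product

def pool_probability(ranks, dists):
    """Eq. (1): Lambda^S = d_{1,t}(k_l1) * ... * d_{m,t}(k_lm).
    dists[i][k-1] holds d_{i+1,t}(k); ranks is one tuple (k_l1, ..., k_lm)."""
    p = 1.0
    for d, k in zip(dists, ranks):
        p *= d[k - 1]
    return p

def total_probability(dists, n):
    """Summed over all n^m possible rank tuples, the pool
    probabilities of normalized distributions add up to one."""
    m = len(dists)
    return sum(pool_probability(t, dists)
               for t in product(range(1, n + 1), repeat=m))
```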
The dependence on both γ̄ and β can be explained by the fact that not every crossover operator creates the new population α solely from the mating pool γ̄, which is only the case for the Generation Replacement Model. In the Steady-State Replacement Model it is also possible that other states βi from the current population β are kept.
An exemplary procedure could work as follows: after getting a mating pool γ̄ of m states, m new states are created by recombination from γ̄. In addition to the current n states there are then n + m states available, and n states are kept for the new generation α by applying some standard procedure (e.g. keep the best n of all n + m states). In the special case of generation replacement we have n = m and β is replaced completely by the states from recombination.
Remark 1. For the recombination itself, think of states from the mating pool γ̄ being taken one after another. Each possible tuple of states for one crossover operation is chosen with the same probability, i.e. the probability is uniformly distributed among all possible tuples of the desired size in γ̄. Most commonly pairs are chosen, and for each pair a split position for one-point crossover, or more than one split position for multi-point crossover procedures, is determined, again uniformly distributed (the proof is general enough that other distributions or procedures are possible here as well).
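The pairing described in Remark 1 can be sketched as follows (names and the string representation of genomes are our own illustration, not prescribed by the analysis):

```python
import random

def one_point_crossover(pool, rng=random):
    """Draw a pair uniformly from the mating pool and a split position
    uniformly, then exchange the tails (one-point crossover).
    Genomes are assumed to be equal-length sequences of length >= 2."""
    a, b = rng.sample(pool, 2)        # uniform over all pairs
    cut = rng.randrange(1, len(a))    # uniform split position 1..len-1
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```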
Combining selection and crossover leads to a transition probability Γ^t_{αβ} from one population β to the next population α.
In summary, the dynamics of GAs can be described as a Markovian random walk in state space. For the development of the probability p^t_α to be in state α (which means to have a certain population in the GA) the master equation

    p^t_α = Σ_{β∈Ω} Γ^t_{αβ} · p^{t−1}_β    (2)

is applicable. Here Γ^t_{αβ} is defined to be

    Γ^t_{αβ} = Σ_{γ̄∈Ω̄} Λ^C_{αγ̄β} Λ^S_{γ̄β} = Σ_{γ̄∈Ω̄} Λ^C_{αγ̄β} ∏_{i=1}^{m} di,t(k_li).    (3)
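Equation (2) is simply a matrix–vector product per generation; a toy sketch (the 2×2 matrix is purely illustrative, not derived from any concrete GA):

```python
def master_equation_step(Gamma, p):
    """One step of Eq. (2): p^t_a = sum_b Gamma_{ab} * p^{t-1}_b,
    with Gamma[a][b] the transition probability from population b to a."""
    n = len(p)
    return [sum(Gamma[a][b] * p[b] for b in range(n)) for a in range(n)]

# Column-stochastic toy transition matrix over two generalized states.
Gamma = [[0.9, 0.4],
         [0.1, 0.6]]
p = [0.5, 0.5]
for _ in range(3):
    p = master_equation_step(Gamma, p)
```

Probability mass is conserved in every step because each column of Γ sums to one.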
Next, the dependence of the performance of the GA on the probability distributions d1,t, d2,t, ..., dm,t over the ranks in the population is investigated, together with the question which choice of these distributions is optimal for an optimization run with S steps.
Most commonly one of the following objectives is used [11]
(here slightly adapted in the notation for GAs):
(O1) The mean of the fitness of the best individual in the
final population should be as large as possible.
(O2) The probability of having a final population contain-
ing a member of optimal fitness should be as large as
possible.
To optimize according to (O1) one chooses
g1(α) = f(α) = max{f(αi) | i = 1,2,...,n}
which means essentially that the quality of a population is
assumed to be equivalent to the quality of the best individual
in the population, and to optimize according to (O2) one
chooses
g2(α) = 1 if α contains a state with fitness fmax, and g2(α) = 0 otherwise,

i.e. only optimal states with fitness fmax have objective values different from zero. Other objectives are possible; the only important fact for the proof is that they are linear in the final state probabilities, as we will see.

The optimization process consists of a finite number of S steps (t := 1, 2, ..., S). Note that Γ^t_{αβ} is linear in di,t(k) for i fixed. The arguments below apply in general to any objective function which is linear in the final state probabilities p^S_α, such as (O1) and (O2). The state probabilities at time t are considered as a vector p^t and the linear objective function with values g(α) for each state α as a vector g. If (·)^tr denotes the transpose, the measure of performance is equivalent to

    g(p^S) = g^tr · p^S = Σ_{α∈Ω} g(α) · p^S_α → max.    (4)
4. SETUP OF A VECTOR SPACE
In the following the distributions di,t(k), k = 1, 2, ..., n, are considered to be n-dimensional vectors di,t with entries d^{i,t}_k ∈ [0,1]. Consider without loss of generality m − 1 of these distributions di,t, i ∈ {1, 2, ..., m}, to be fixed. Only one remaining distribution, denoted by dr,t, is open to optimization. The question is then how to choose dr,t to maximize the objective function. As a consequence of the assumptions (A2) and (A3), the region F of feasible vectors dr,t is defined by the n + 1 linear inequations in (A2) and the one linear equation in (A3), where the first inequation 1 ≥ d^{r,t}_1 follows from the others. Of the remaining n inequations, n − 1 must be set to equations to find extreme points (vertices) of the region F. Letting V denote the set of extreme points of F, the elements of V are exactly those vectors dr,t which have an initial sequence of i entries equal to 1/i followed by a sequence of n − i entries equal to zero. Explicitly, V = {v1, v2, ..., vn}, where v1 = (1, 0, 0, ..., 0)^tr, v2 = (1/2, 1/2, 0, ..., 0)^tr, vi = (1/i, 1/i, ..., 1/i, 0, ..., 0)^tr, and vn = (1/n, 1/n, ..., 1/n)^tr. Note that the elements of V are linearly independent. Then F is exactly the convex hull C(V) of V, which is a simplex.

This equivalence of F and C(V) can be shown by standard calculations, see e.g. [13] as reference.
5. PROOF
Now the Bellman principle of dynamic programming is applied, starting with the last step of the optimization process. The output of the last step is p^S and is used to evaluate the optimality criterion (4). In the last step S one has to solve the optimization problem (4) for the given input p^{S−1}. Using (2) one gets

    g(p^S) = Σ_{α,β∈Ω} g(α) · Γ^S_{αβ} · p^{S−1}_β → max

with Γ^S_{αβ} given by (3). Hence, the maximum of a linear function on a simplex has to be found. To do so choose the distribution dr,S, which selects one of the m individuals for crossover, equal to one of the vertices vi ∈ V. The corresponding transition probabilities are denoted by Γ^S, because then all m distributions are fixed in this stage. Considering now the step before, i.e. step S − 1, one gets

    g^tr · p^S = (g^tr · Γ^S) · ( Σ_{α,β∈Ω} Γ^{S−1}_{αβ} · p^{S−2}_β ) → max.
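The backward-induction argument rests on the fact that a linear objective over the feasible simplex is maximized at a vertex; this can be checked numerically against random convex combinations (a sketch with our own helper names):

```python
def vertex_values(g):
    """Objective value g . v_i for each vertex v_i of the simplex,
    where v_i has i leading entries 1/i."""
    n = len(g)
    return [sum(g[k] for k in range(i)) / i for i in range(1, n + 1)]

def random_feasible(n, rng):
    """Random convex combination of the vertices: by construction it
    satisfies (A2) and (A3), i.e. it lies in the simplex F = C(V)."""
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    d = [0.0] * n
    for i, wi in enumerate(w, start=1):
        for k in range(i):
            d[k] += (wi / s) / i
    return d
```

No interior point beats the best vertex, in line with the fundamental theorem of linear programming.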
Defining g^{S−1} = g^tr · Γ^S as the new objective function, the same arguments can be applied to choose dr,S−1. The resulting matrix is then denoted by Γ^{S−1}, where the optimal transition probabilities are again found by taking dr,S−1 to be an element of V. The same argument holds for all other steps as well, i.e. dr,t, t = 1, 2, ..., S, are all elements of the vertex set V.

Because an arbitrary distribution dr,t has been chosen to be the variable one in the proof, the same arguments hold for all distributions dr,t, r = 1, 2, ..., m, as well. Hence, the proof shows that a rectangular distribution over the fittest individuals in each generation t = 1, 2, ..., S of a GA gives the best implementation of the selection step for each individual used for the crossover step in iteration t.

The same holds for the mutation operator as for crossover, because with regard to our proof it is an operator with equivalent characteristics, only with one input state.
6. CONCLUSIONS
In this paper the problem of selecting individuals from the
population of a GA for crossover based on a fitness function
has been considered. The master equation was the means of
choice to describe the corresponding dynamics as a random
walk in state space and some straightforward assumptions
on the probability distributions for selecting the individuals
in a certain generation have been formulated.
The goal was to find transition probabilities assuring the
optimum control of the evolutionary development in the GA.
A rectangular distribution of selection probabilities is prov-
ably optimal, provided the performance is measured by a lin-
ear function in the state probabilities, which includes many
reasonable choices as for instance maximizing the mean fit-
ness of the best individual in the final population.
The proof above is based on the fundamental theorem of
linear programming, which states that a linear function de-
fined on a simplex assumes its minimum at a vertex. The
proof does not state that all optimal crossover selection strategies in GAs are rectangular. Other strategies may do equally well, but not better. If there exists an optimal strategy other than Threshold Selecting, it follows that an edge or a face of the described simplex does equally well. Thus, it seems unlikely that a strictly monotonic distribution can be optimal [11], since this would imply that all the vertices in V do equally well.
As presented, the proof can be applied to any crossover procedure in GAs with independent probability distributions for the selection of the crossover individuals, and both for the
Generation Replacement Model, where the mating pool has
size n for populations of size n, and also for the Steady-
State Replacement Model, where only some individuals are
replaced [21].
Currently the knowledge that best performance can be
achieved using Threshold Selecting is only of limited use,
since the cutoff ranks γ to be used are not known a priori.
Therefore it would be interesting to perform numerical ex-
periments comparing different possible distributions empiri-
cally. Further, it is reasonable to introduce a schedule on the
cutoff rank γ, narrowing the rectangular distribution during
the optimization process and thus increasing the evolutionary pressure gradually. Moreover, it would be interesting to make theoretical progress also concerning the choice of one of the possible rectangular distributions, or to reduce the choice to a certain assortment.
Our proof was based on the assumption that the objective
measuring the performance of the GA is a linear function of
the state probabilities. While this includes very common
measures, it does not include them all.
As an example, the best-so-far fitness over the individuals of all previous generations is a measure beyond the scope of the proof presented here. From a practical point of view this can be fixed easily by adding one individual to the population and adapting the crossover operator so as to keep the best individual in each iteration, unless one with better or equal fitness is found.
Clearly this adaptation is possible for each given crossover operator C, yielding an operator C′. Applying C′, the objective value of the individual with the best fitness in the final population is equivalent to the best-so-far fitness, and the optimal distributions are again rectangular because the proof applies as before.
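The adaptation from C to C′ can be sketched as a wrapper that reserves one extra slot for the best individual so far (the function names and the plain-max fitness are our own illustration):

```python
def elitist(crossover, fitness):
    """Wrap a crossover/replacement operator C into C': the returned
    population carries one extra slot holding the best individual of the
    input, unless the offspring already contain one at least as good."""
    def crossover_prime(population):
        champion = max(population, key=fitness)
        offspring = crossover(population)
        best_new = max(offspring, key=fitness)
        keeper = best_new if fitness(best_new) >= fitness(champion) else champion
        return offspring + [keeper]
    return crossover_prime
```

With this wrapper the best fitness in the final population equals the best-so-far fitness, so a linear objective such as (O1) covers that measure as well.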
Further, the proof above had to assume a finite state
space. The exploration of continuous state spaces would be
interesting as well, but considering the discrete arithmetic
of digital computers any state space in practice is effectively
finite [13].
Finally, the arguments presented here establish the struc-
ture of a provably optimal strategy which could be applied
to study also other heuristic approaches to global optimiza-
tion.
7. ACKNOWLEDGMENTS
The authors would like to thank the German Research
Foundation (DFG) for partially funding this research.
8. REFERENCES
[1] S. Boettcher and A. G. Percus. Extremal
Optimization: Methods derived from Co-Evolution. In
GECCO-99, Proceedings of the Genetic and
Evolutionary Computation Conference, pages 825–832,
Orlando, Florida, July 1999.
[2] P. G. Busacca, M. Marseguerra, and E. Zio.
Multiobjective Optimization by Genetic Algorithms:
Application to Safety Systems. Reliability Engineering
& System Safety, 72(1):59–74, April 2001.
[3] M. Chakraborty and U. K. Chakraborty. An Analysis
of Linear Ranking and Binary Tournament Selection
in Genetic Algorithms. In Proceedings of the
International Conference on Information,
Communications and Signal Processing, pages
407–411, Singapore, September 1997.
[4] U. K. Chakraborty. A Simpler Derivation of Schema
Hazard in Genetic Algorithms. Information Processing
Letters, 56(2):77–78, 1995.
[5] U. K. Chakraborty. A Branching Process Model for Genetic Algorithms. Information Processing Letters, 56(5):281–292, December 1995.
[6] U. K. Chakraborty, K. Deb, and M. Chakraborty.
Analysis of Selection Algorithms: A Markov Chain
Approach. Evolutionary Computation, 4(2):133–167,
1997.
[7] L. D. Chambers. The Practical Handbook of Genetic
Algorithms: Applications. CRC Press Inc., 2000.
[8] G. Dueck and T. Scheuer. Threshold Accepting: A General Purpose Optimization Algorithm Appearing Superior to Simulated Annealing. Journal of Computational Physics, 90:161–175, 1990.
[9] A. Franz and K. H. Hoffmann. Optimal Annealing Schedules for a Modified Tsallis Statistics. Journal of Computational Physics, 176(1):196–204, February 2002.
[10] A. Franz and K. H. Hoffmann. Threshold Accepting as
Limit Case for a Modified Tsallis Statistics. Applied
Mathematics Letters, 16(1):27–31, January 2003.
[11] A. Franz, K. H. Hoffmann, and P. Salamon. Best
Possible Strategy for Finding Ground States. Physical
Review Letters, 86(23):5219–5222, June 2001.
[12] P. Godefroid and S. Khurshid. Exploring Very Large
State Spaces Using Genetic Algorithms. In J.-P.
Katoen and P. Stevens, editors, LNCS 2280, pages
266–280. Springer-Verlag Berlin Heidelberg, 2002.
[13] F. Heilmann, K. H. Hoffmann, and P. Salamon. Best
Possible Probability Distribution over Extremal
Optimization Ranks. Europhysics Letters,
66(3):305–310, March 2004.
[14] B. Hemmateenejad, M. Akhond, R. Miri, and
M. Shamsipur. Genetic Algorithm Applied to the
Selection of Factors in Principal Component-Artificial
Neural Networks: Application to QSAR Study of
Calcium Channel Antagonist Activity of
1,4-Dihydropyridines (Nifedipine Analogous). Journal
of Chemical Information and Modeling, 43(4):1328
–1334, June 2003.
[15] K. H. Hoffmann, F. Heilmann, and P. Salamon.
Fitness Threshold Accepting over Extremal
Optimization Ranks. Physical Review E, 70(4):046704,
October 2004.
[16] S. Kikuchi, D. Tominaga, M. Arita, K. Takahashi, and
M. Tomita. Dynamic Modeling of Genetic Networks
Using Genetic Algorithm and S-System.
Bioinformatics, 19(5):643–650, 2003.
[17] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220(4598):671–680, 1983.
[18] A. Kolen. A Genetic Algorithm for the Partial Binary
Constraint Satisfaction Problem: an Application to a
Frequency Assignment Problem. Statistica
Neerlandica, 61(1):4–15, February 2007.
[19] H. Mühlenbein and D. Schlierkamp-Voosen. The
Science of Breeding and its Application to the Breeder
Genetic Algorithm (BGA). Evolutionary Computation,
1(4):335–360, 1993.
[20] J.-P. Rennard. Introduction to Genetic Algorithms.
http://www.rennard.org/alife/english/gavgb.pdf,
2000. [Online; accessed 29-February-2008].
[21] D. Srinivasan and L. Rachmawati. An Efficient
Multi-objective Evolutionary Algorithm with
Steady-State Replacement Model. In Proceedings of
the 8th annual conference on Genetic and evolutionary
computation, pages 715–722, Seattle, Washington,
USA, July 2006.
[22] C. R. Stephens, M. Toussaint, D. Whitley, and P. F.
Stadler, editors. Foundations of Genetic Algorithms:
9th International Workshop, FOGA 2007, Mexico
City, Mexico, 2007. Springer Berlin.
[23] C. Tsallis and D. A. Stariolo. Generalized Simulated
Annealing. Physica A, 233:395–406, 1996.