A survey of non-gradient optimization methods in structural engineering
Warren Hare (a), Julie Nutini (b), Solomon Tesfamariam (c)
(a) Mathematics, University of British Columbia, Kelowna, BC, Canada
(b) Computer Science, University of British Columbia, Vancouver, BC, Canada
(c) School of Engineering, University of British Columbia, Kelowna, BC, Canada
article info
Article history:
Received 8 November 2012
Received in revised form 11 March 2013
Accepted 11 March 2013
Keywords:
Optimization
Structural engineering
Non-gradient methods
Heuristic methods
Swarm methods
Derivative-free optimization
abstract
In this paper, we present a review on non-gradient optimization methods with applications to structural
engineering. Due to their versatility, there is a large use of heuristic methods of optimization in structural
engineering. However, heuristic methods do not guarantee convergence to (locally) optimal solutions. As
such, recently, there has been an increasing use of derivative-free optimization techniques that guarantee
optimality. For each method, we provide a pseudo code and list of references with structural engineering
applications. Strengths and limitations of each technique are discussed. We conclude with some remarks
on the value of using methods customized for a desired application.
© 2013 Elsevier Ltd. All rights reserved.
1. Introduction
Optimization is the process of minimizing or maximizing an
objective function (e.g. cost, weight). Three main types of optimization
problems arise in structural engineering [1,2]: sizing optimization,
shape optimization, and topology optimization.
Sizing optimization entails determining the member area of each
element. Shape optimization entails optimizing the profile/shape
of the structure. Topology optimization is associated with the connectivity
of structural elements. Traditionally, the three optimization
problems were solved independently (e.g., [3]); however, recent
trends show that simultaneous optimization of sizing, shape and topology
provides better results [2,4].
In this paper, we consider optimization problems of the form

$$\begin{array}{ll}
\underset{x}{\text{minimize}} & f(x)\\
\text{subject to} & c(x) \le 0,\\
& l \le x \le u,
\end{array} \qquad (1)$$

where $f:\mathbb{R}^n \to \mathbb{R}$, $c(x) = (c_1(x),\ldots,c_m(x))$, and $\le$ should be interpreted
coordinate-wise. We permit $l_j = -\infty$ and $u_j = +\infty$, $j \in \{1,\ldots,n\}$, to allow
for the possibility of unbounded variables.
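To make form (1) concrete for the non-gradient methods reviewed below, the short Python sketch that follows poses a toy two-member sizing problem in this form; the member lengths, forces and limits are hypothetical values chosen for illustration only, and the quadratic penalty is one common (but not the only) way of handling c(x) <= 0 when only function values are available.

import numpy as np

# Illustrative two-member sizing problem (hypothetical data, not from this paper):
# x = cross-sectional areas [cm^2]; minimize weight subject to stress limits.
LENGTHS = np.array([100.0, 141.4])   # member lengths [cm]
FORCES = np.array([20.0, 28.3])      # member forces [kN]
RHO, SIGMA_MAX = 7.85e-3, 16.0       # density [kg/cm^3], allowable stress [kN/cm^2]

def f(x):
    """Objective: total weight."""
    return RHO * np.dot(LENGTHS, x)

def c(x):
    """Constraints c(x) <= 0: member stresses must not exceed SIGMA_MAX."""
    return FORCES / x - SIGMA_MAX

l, u = np.full(2, 0.1), np.full(2, 20.0)   # bounds on the areas

def penalized_f(x, mu=1e3):
    """Single merit function combining f and c, as used by many heuristics."""
    return f(x) + mu * np.sum(np.maximum(c(x), 0.0) ** 2)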
If the objective function of an optimization problem is
smooth (i.e., differentiable) and gradient information is reliable,
then gradient based optimization algorithms present an extre-
mely powerful collection of tools for solving the problem. How-
ever, in some structural engineering problems, such as when
simulations are employed to imitate problem conditions, gradi-
ent information may not be available for the problem. Even if
gradient information is available, it can be unreliable or difficult
to compute. Thus, non-gradient methods are incredibly useful
optimization tools.
As the name suggests, non-gradient methods do not require
gradient information to converge to a solution. Rather, these meth-
ods solely use function evaluations of the objective function to
converge to a solution. We note that if gradient information is
available for a well-behaved problem, then a gradient based meth-
od should be used. However, when gradient information is not
available, non-gradient methods are practical alternatives. Several
reviews of non-gradient methods for optimization problems in
structural engineering have been published. The majority of these
focus on heuristic methods. In 1991, a review of genetic algorithms
for structural optimization was published [5]. In 2002, a more gen-
eral review of evolutionary algorithms for structural optimization
was published [6]. In 2007, a review focused on the design of steel
frames via stochastic search methods was published [7]. In 2008, a
review on the use of simulated annealing methods for structural
optimization [8] and a general review on publications of structural
engineering applications using particle swarm optimization [9]
were published. In 2009, a review on the use of the harmony
search methods in structural design optimization was published
[10]. In 2011, a review focused on the design of skeletal structures
using a variety of heuristic techniques was published [11]. Most re-
cently, in 2012, a comprehensive review of stochastic search heu-
ristics was published [12].
In this paper, we present a detailed review of the non-gradient
methods for structural optimization. We also provide a list of ref-
erences that utilize the optimization methods. We include the
methods in the review papers previously mentioned, as well as
several other methods. We also include the more recent Deriva-
tive-free Optimization (DFO) methods that have become increas-
ingly popular in optimization applications. Unlike the general
category of non-gradient methods, DFO methods are supported
by mathematical convergence theories, which ensure that the
algorithms converge to a local minimizer of the objective function.
Due to their practical utility and the numerous problems suited
to them, new non-gradient algorithms are frequently developed;
for example, the very recent magnetic charged system search
[13], which adapts the (also recent) charged system search [14].
In this paper, we choose to focus on methods that appear fre-
quently in the literature. Thus, we exclude some recently proposed
methods. However, we emphasize that new non-gradient methods
are regularly providing improved solutions to many structural
engineering problems.
The remainder of this paper is organized as follows. In Section 2,
we present some of the most popular heuristic methods used in
structural engineering: evolutionary algorithms. These methods
use techniques that imitate natural evolution. In Section 3, we
present some heuristic methods inspired by physical processes
and the nature of stochastic processes. In Section 4, we present
heuristic methods inspired by self-organizing systems. These
methods are often referred to as swarm algorithms, as they are of-
ten inspired by how animal swarms employ simple rules to devel-
op favorable system behavior. In Section 5, we present formalized
methods that are strengthened by mathematical convergence the-
ory. We refer to these methods as Derivative-free Optimization, as
is commonly used in the mathematical community. Each of these
sections is broken into several subsections that describe examples
of specific algorithms. In Section 6, we consider our observations
from the previous sections, present our conclusions and provide
a summary table of the methods discussed.
2. Evolutionary algorithms
Evolutionary algorithms are a class of non-gradient population-
based algorithms used in many areas of engineering optimization.
These methods use techniques that imitate natural evolution. They
follow the four general steps of reproduction, mutation, recombi-
nation and selection and use a fitness function to determine the
conditions that support survival.
2.1. Genetic algorithm
A Genetic Algorithm (GA) is probably the most commonly used
evolutionary algorithm and one of the more common non-gradient
methods. These methods were originally proposed by John Holland
in 1975 [15]. A GA selects an initial population of potential solutions
to the problem at hand, say P(t) for iteration t. Using
stochastic transformations, some solutions will undergo a mutation
or crossover step. These new potential solutions are referred
to as the offspring, say C(t). From both P(t) and C(t), the 'most fit'
solutions (solutions with the better objective values) are selected
to form a new population, P(t+1). After the evaluation of several
generations, the algorithm hopefully converges to the optimal or
a sub-optimal solution of the objective function. Generally, the
structure of the GA is as follows [16]:
procedure GeneticAlgorithm
begin
Initialize and evaluate P(t);
while (not termination condition) do
begin
Recombine P(t) to yield C(t);
Evaluate C(t);
Select P(t+ 1) from P(t) and C(t);
end
end
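As an illustration of this loop, the following Python sketch implements a minimal real-coded GA for a bound-constrained problem; the placeholder objective, the population size and the selection/crossover/mutation operators are our own illustrative choices and are not prescribed by the references.

import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # placeholder objective (stands in for weight/cost)
    return np.sum((x - 0.3) ** 2)

l, u = np.zeros(5), np.ones(5)              # variable bounds
n_pop, n_gen, p_mut = 30, 200, 0.1

P = rng.uniform(l, u, size=(n_pop, l.size))             # initial population P(0)
fit = np.array([f(x) for x in P])                       # evaluate P(0)
for t in range(n_gen):
    # Recombine P(t) to yield offspring C(t): tournament selection + blend crossover
    i, j = rng.integers(n_pop, size=(2, n_pop))
    winners = np.where((fit[i] < fit[j])[:, None], P[i], P[j])
    mates = winners[rng.permutation(n_pop)]
    alpha = rng.random((n_pop, l.size))
    C = alpha * winners + (1 - alpha) * mates
    # Mutation: random resets with probability p_mut, then clip to the bounds
    mask = rng.random(C.shape) < p_mut
    C = np.clip(np.where(mask, rng.uniform(l, u, C.shape), C), l, u)
    # Select P(t+1) from P(t) and C(t): keep the n_pop fittest individuals
    both = np.vstack([P, C])
    fit_both = np.array([f(x) for x in both])
    keep = np.argsort(fit_both)[:n_pop]
    P, fit = both[keep], fit_both[keep]

print("best solution:", P[0], "objective:", fit[0])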
GAs have been applied to numerous structural engineering
applications: structural reliability [17,18], bridge structure design,
maintenance and repair [19–21], design of welded steel plate girder
bridges [22], seismic zoning [23], seismic design of lifeline systems
[24], truss structure optimization [25–27], the size, shape and
topology of skeletal structures [28] and design optimization of
steel structures [29–32], reinforced concrete flat slab buildings
[33], steel telecommunication poles [34] and viscous dampers
[35]. See also [36–49,31,50–58].
2.2. Evolutionary strategies
Evolutionary strategies (ES) are a subclass of evolutionary algorithms.
The structure of these methods was originally developed
by Rechenberg and Schwefel [59–62]. To start, an ES defines an
initial parent population of size μ that consists of potential solutions,
say B_p^(0), to the problem at hand. Each individual a_k in B_p^(0) is
comprised of a parameter set y_k, its objective value F_k := F(y_k)
and an evolvable set of strategy parameters s_k. Next, the parent
population reproduces, generating λ offspring, where λ is a fixed
parameter of the method. To do this, first there is a marriage step,
where one family C of size ρ is randomly chosen from the parent
population at time t, B_p^(t). Then, for the individuals in family C, their
strategy and object parameters are recombined. These new parameters
are then mutated, forming the offspring population at time
t, B_o^(t). Finally, the selection step forms a new parent population
B_p^(t+1). There are two main types of ESs for different numbers of parents
and offspring, namely (μ + λ)-ES and (μ, λ)-ES. Generally, the
structure of the ES is as follows [63]:
procedure EvolutionStrategy
begin
  Initialize parent population B_p^(0);
  while (not termination condition) do
    for n = 1 : λ do begin
      C_n = Marriage(B_p^(t), ρ);
      s_n = s_recombination(C_n);
      y_n = y_recombination(C_n);
      \tilde{s}_n = s_mutation(s_n);
      \tilde{y}_n = y_mutation(y_n, \tilde{s}_n);
    end
    Update B_o^(t);
    Perform selection and update parent population B_p^(t);
  end
end
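A compact (μ+λ)-ES in the spirit of the pseudocode is sketched below in Python; the self-adaptive log-normal step-size rule, the family size ρ = 2 and the test objective are standard textbook choices supplied for illustration, not specifics taken from the cited references.

import numpy as np

rng = np.random.default_rng(1)

def f(x):                                    # placeholder objective
    return np.sum(x ** 2)

n, mu, lam, n_gen = 5, 5, 20, 200
tau = 1.0 / np.sqrt(2 * n)                   # learning rate for the strategy parameters

# Parent population B_p: each individual carries object variables y and step sizes sigma
Y = rng.normal(size=(mu, n))
S = np.full((mu, n), 0.3)
for t in range(n_gen):
    # Marriage + recombination: average rho = 2 randomly chosen parents
    idx = rng.integers(mu, size=(lam, 2))
    y_rec, s_rec = Y[idx].mean(axis=1), S[idx].mean(axis=1)
    # Mutation: strategy parameters first, then object parameters
    s_new = s_rec * np.exp(tau * rng.normal(size=(lam, n)))
    y_new = y_rec + s_new * rng.normal(size=(lam, n))
    # (mu + lambda) selection: the best mu of parents and offspring survive
    Y_all, S_all = np.vstack([Y, y_new]), np.vstack([S, s_new])
    keep = np.argsort([f(y) for y in Y_all])[:mu]
    Y, S = Y_all[keep], S_all[keep]

print("best:", Y[0], "f =", f(Y[0]))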
Evolutionary strategies appear in the literature often and have
been applied to several structural engineering problems: optimiz-
ing truss structures ([64–66]), optimizing a connection rod shape
and minimizing the volume of a square plate with a central cut-
out ([67]), and the design of a cantilever beam ([68]), steel frames
([30]) and cylindrical shells ([69]).
2.3. Strengths and limitations of evolutionary algorithms
As a class of heuristic algorithms, there is no mathematical convergence
theory for evolutionary algorithms, and thus no assurance
of optimality of the final solution found. However, in
practice, evolutionary algorithms can be very successful in finding
good solutions quickly. The number of function evaluations evolutionary
algorithms require can be scaled independently of the
dimension of the problem, making them versatile for large scale
problems. However, due to the need to maintain large populations
of candidate solutions, evolutionary algorithms can be cumbersome
for small scale problems. Most papers that compare the performance
of non-gradient methods on one application include at
least one comparison to an evolutionary algorithm (see Table 1).
Evolutionary algorithms are often considered a baseline approach,
and the method to beat if one wishes to claim a new algorithm is of
high quality.
3. Physical algorithms
3.1. Harmony search
The Harmony Search (HS) algorithm was first introduced by
Geem, Kim and Loganathan in 2001 [70]. As the name suggests, this
algorithm mimics the evolution of a harmony relationship between
several sound waves of differing frequencies when played simultaneously.
In music, a best state (aesthetically pleasing harmony) is
desired; in optimization, the 'best state' is achieved at the global
optimum. The processes of random selection, memory consideration
and pitch adjustment are all incorporated in this algorithm.
There are two main parameters used in the HS algorithm: the Harmony
Memory (HM) accepting rate, denoted by r_accept, and the
pitch adjustment rate, denoted by r_pa. As their names suggest,
r_accept is the rate at which a new harmony is accepted into the
HM, and r_pa controls the degree to which the pitch can be adjusted.
The basic structure of an HS algorithm is as follows [71]:
procedure HarmonySearch
begin
  Initialize parameters, including r_accept and r_pa;
  Generate initial HM with random harmonies;
  while t < max number of iterations
    while i <= number of variables
      if random value rand < r_accept
        Choose value from HM for the variable i;
        if rand < r_pa
          Adjust the value by adding a certain amount;
        end if
      else
        Choose a new random value;
      end if
    end while
    Accept the new harmony if better;
  end while
  Find the current best harmony (solution);
end
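A minimal Python sketch of this loop for a continuous bound-constrained problem is given below; the parameter values (r_accept, r_pa and the pitch-adjustment bandwidth bw) are typical defaults from the harmony search literature rather than values mandated by this survey, and the objective is a placeholder.

import numpy as np

rng = np.random.default_rng(2)

def f(x):                                   # placeholder objective
    return np.sum((x - 0.7) ** 2)

l, u = np.zeros(4), np.ones(4)
hm_size, n_iter = 10, 2000
r_accept, r_pa, bw = 0.9, 0.3, 0.05         # memory accepting rate, pitch adjustment rate, bandwidth

HM = rng.uniform(l, u, size=(hm_size, l.size))          # harmony memory
fHM = np.array([f(x) for x in HM])
for t in range(n_iter):
    new = np.empty(l.size)
    for i in range(l.size):
        if rng.random() < r_accept:                      # choose a value from memory
            new[i] = HM[rng.integers(hm_size), i]
            if rng.random() < r_pa:                      # pitch adjustment
                new[i] += bw * rng.uniform(-1, 1)
        else:                                            # random selection
            new[i] = rng.uniform(l[i], u[i])
    new = np.clip(new, l, u)
    worst = np.argmax(fHM)
    if f(new) < fHM[worst]:                              # accept if better than the worst harmony
        HM[worst], fHM[worst] = new, f(new)

best = np.argmin(fHM)
print("best harmony:", HM[best], "f =", fHM[best])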
Several examples of structural engineering problems that have
been solved using an HS algorithm are: optimization of truss
structures [50,72], optimization of pin connected structures
[73], minimum cost design of steel frames [41], and optimum de-
sign of steel frames [40,30,45], steel sway frames [57], cellular
beams [74] and reinforced concrete frames [75]. Several struc-
tural design optimization problems are tackled using an HS algo-
rithm, including sizing and configuration for a truss structure,
pressure vessel design, and welded beam design in [51]. See also
[44,46,31,76,77].
3.2. Simulated annealing
A Simulated Annealing (SA) algorithm [78,79] is a probabilistic
heuristic that mimics the annealing process used in materials sci-
ence. During this process, a material is heated to high tempera-
tures, causing atoms to move from their initial positions and
randomly move through higher energy states. As the temperature
of the material is slowly lowered, the atoms settle into a new con-
figuration that hopefully has a lower internal energy. Translating
this process to an optimization problem, the initial state can be
thought of as a local minimum. The heating of the material trans-
lates to replacing the current solution(s) with a new random solu-
tion(s). The new solution(s) may be accepted according to a
probability based on the resulting function value decrease and on
a ‘temperature’ measure, which slowly decreases as iterations con-
tinue. The temperature parameter allows for solutions to be ac-
cepted that may have a higher objective value, thus avoiding
local minima. The basic structure of a SA algorithm is as follows
[80]:
procedure SimulatedAnnealing
begin
  Select an initial state i ∈ S and initial temperature T > 0;
  Set temperature change counter t := 0;
  while (not termination condition) do
  begin
    Set repetition counter n := 0;
    repeat
      Generate state j, a randomly chosen neighbor of i;
      Calculate d = f(j) − f(i);
      if d < 0 then i ← j
      else if random(0,1) < exp(−d/T), then i ← j;
      n ← n + 1;
    until n = N(t);
    t ← t + 1;
    T ← T(t);
  end
end
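The following Python sketch mirrors this pseudocode for a continuous problem; the Gaussian neighbor move, the inner repetition count N(t) and the geometric cooling schedule T ← 0.95 T are common illustrative choices we supply, not parameters fixed by the references.

import numpy as np

rng = np.random.default_rng(3)

def f(x):                                   # placeholder multimodal objective
    return np.sum(x ** 2) + 2 * np.sin(5 * x).sum()

x = rng.uniform(-2, 2, size=3)              # initial state i
fx, T, n_outer, n_inner = f(x), 5.0, 100, 50
best, fbest = x.copy(), fx

for t in range(n_outer):
    for _ in range(n_inner):                # N(t) repetitions at a fixed temperature
        y = x + rng.normal(scale=0.3, size=x.size)   # random neighbor j of the current state i
        d = f(y) - fx
        # accept improvements always, uphill moves with probability exp(-d/T)
        if d < 0 or rng.random() < np.exp(-d / T):
            x, fx = y, f(y)
            if fx < fbest:
                best, fbest = x.copy(), fx
    T *= 0.95                               # cooling schedule T <- T(t)

print("best:", best, "f =", fbest)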
SA algorithms for structural engineering have been used mainly
in design optimization. Some examples are optimizing tensegrity
systems [81] and the design optimization of truss structures [26],
laminated composite structures [82,36], steel frames [83,30],
cross-sections [84] and concrete frames [85]. See also
[86,42,77,58].
3.3. Ray optimization
Inspired by laws that govern the transition of a light ray from
one medium to another, Ray Optimization (RO) is relatively new
to structural engineering. The algorithm employs a number of
agents that search the space. Each agent can be thought of as a particle
of light, with a location and direction. At each iteration,
each agent computes an 'origin', which is a point defined by
the average of the best known global solution and the best known
solution for the individual agent. Using Snell's refraction law
and a small random perturbation, each agent's direction is then
adjusted to move towards the 'origin' and the agent's location is
updated by moving in the new direction. The basic structure of
RO is as follows [87]:
procedure RayOptimization
begin
  Generate initial conditions for agents i;
  while (not termination condition) do
  begin
    if an agent violates a boundary, then fix its position;
    Evaluate objective for each agent;
    Determine the so-far best global solution g;
    For each agent, determine the so-far best position and store as local best b_i;
    Check stopping conditions;
    Compute origin O_i for each agent: O_i = (b_i + g)/2;
    Apply Snell's refraction law and random perturbation to determine
    each agent's movement towards their origin;
  end
end
Ray optimization has been successfully applied to spring design,
welded beam design, and truss design [87,88].
3.4. Tabu search
The Tabu Search (TS) method, formally proposed by Glover in
1989 [89], is a local search heuristic that works with other algo-
rithms to overcome the restrictions of local optimality. It is applied
to constrained combinatorial optimization problems that are dis-
crete in nature.
To describe the process of a tabu search, we select an initial solution
x ∈ X, where X is the feasible set. We let S(x) be the set of moves
that move x to an adjacent extreme point. Let T ⊆ S, where T is the
set of tabu moves. The set T is determined by a function that employs
previous information from the search process, up to t iterations prior
to the current iteration. To determine membership in T, there may be
an itemized list or a set of tabu conditions, i.e.,

$$T(x) = \{s \in S : s \text{ violates the tabu conditions}\}.$$
As a pseudocode, the TS method has the following form [89]:
procedure TabuSearch
begin
  Select an initial x ∈ X;
  x* := x, T := ∅, k ← 0;
  begin
    if S(x) − T is empty
      stop;
    else
      Set k ← k + 1;
      Select s_k ∈ S(x) − T such that s_k(x) = OPTIMUM(s(x) : s ∈ S(x) − T);
    end
    Let x := s_k(x);
    if f(x) < f(x*)
      x* := x;
    end
    Check stopping conditions;
    Update T;
  end
end
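As a purely illustrative Python sketch of this idea, the code below runs a tabu search over binary design vectors (e.g. member present/absent, or small/large section); the single-bit-flip neighborhood, the tabu tenure and the toy mismatch objective are our own choices and are not taken from Glover's paper or from the structural references.

import numpy as np

rng = np.random.default_rng(4)
n, n_iter, tenure = 12, 200, 5
target = rng.integers(0, 2, size=n)          # hidden "optimal" design (toy problem only)

def f(x):                                    # toy objective: mismatch with the target design
    return int(np.sum(x != target))

x = rng.integers(0, 2, size=n)
x_best, f_best = x.copy(), f(x)
tabu = {}                                    # move (bit index) -> iteration at which it is allowed again

for k in range(n_iter):
    # Evaluate all single-bit-flip neighbors whose move is not currently tabu
    candidates = []
    for i in range(n):
        if tabu.get(i, 0) <= k:
            y = x.copy(); y[i] ^= 1
            candidates.append((f(y), i, y))
    if not candidates:
        break
    fy, i, y = min(candidates, key=lambda c: c[0])   # OPTIMUM over S(x) - T
    x = y
    tabu[i] = k + tenure                             # flipping bit i is tabu for `tenure` iterations
    if fy < f_best:
        x_best, f_best = x.copy(), fy

print("best design:", x_best, "f =", f_best)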
The TS allows an algorithm to store past information and uses it
to improve the steps taken; it can prevent an algorithm from con-
verging back to a local optimum. In structural engineering, it has
been used to optimize the structural weight of frames [90], to opti-
mize the design of steel structures [83,30] and truss structures
[26], and to evaluate the seismic performance of optimized frame
structures [91]. See also [45].
3.5. Strengths and limitations of physical and stochastic algorithms
Like evolutionary algorithms, there is no mathematical convergence
theory for physical and stochastic algorithms. These are designed
to break free from local minimizers and have often been
found successful in acquiring better global solutions than other
algorithms. However, the tendency to leave local minimizers can
cause difficulty in convergence. As such, physical and stochastic
algorithms are often used in conjunction with other algorithms
that are designed to zoom in on local solutions. As stochastic
methods, it is difficult to reproduce results from such algorithms.
Running the same algorithm on the same problem may result in
widely different answers. It has been argued that this contradicts
the scientific desire for reproducibility of experiments. This can be
overcome by careful programming and retention of the 'random'
strings used.
4. Swarm algorithms
Swarm algorithms imitate the processes of decentralized, self-
organized systems, which can be either natural or artificial in nat-
ure. The most commonly used swarm algorithms in structural
engineering model biological systems that use simple rules, which
result in the development of an ‘intelligent’ system behavior. The
following swarm algorithms will be discussed in the subsequent
sub-sections: ant colony optimization, particle swarm optimiza-
tion, shuffled frog-leaping, and artificial bee colony.
4.1. Ant colony optimization
As the name suggests, an Ant Colony Optimization (ACO) algorithm
follows the processes of an ant colony searching for food.
This algorithm is a stochastic combinatorial optimization method
that uses mathematical principles from graph theory. Basically, it
models the process of ant foraging by pheromone communication
through path formation. A detailed description of the Ant System,
as originally named by Dorigo, Maniezzo and Colorni, can be found
in [92].
To discuss ACO, it is necessary to define some terms from graph
theory. A mathematical graph can be thought of as a collection of
dots connected by a series of lines. Mathematically, the dots are
called nodes (or vertices) and represented by an index i. A line connecting
node i to node j is called an edge and represented by a pair
of indices (i,j). A path is a way of getting from one node to another
node by traveling along the edges.
In ACO, the optimization problem is formulated in terms of
determining the shortest path on a graph. In brief, the positions for
each ant are selected and the pheromone trail intensities at iteration
t = 0, denoted by τ_ij(t), for each edge (i,j) are initialized. Thereafter,
every ant moves from its current position to another
position based on a probability function, which is a function of
two 'desirability measures': pheromone trail intensity and visibility
[92]. For a trail travelled by many ants, the pheromone intensity
will be strong, indicating a favorable path. The visibility measure
favors proximity of positions, making closer positions more
desirable.
After a set number of iterations, all ants will have completed a
tour of positions and a measure of the change in pheromone trail
intensities will be updated. This cycle continues until either a
maximum number of cycles has been completed or every ant has
followed the same tour. The general structure of an ACO algorithm is
as follows [92]:
procedure AntColony
begin
  For every edge (i,j), initialize τ_ij(t);
  Place m ants on n nodes;
  while (not termination condition) do
  begin
    for s = 1, ..., n − 1
      for k = 1, ..., m
        Move the kth ant to node j using probability function p_ij^k(t);
      end
    end
    Move each ant to its corresponding starting node;
    Calculate the length of each ant's tour, L_k;
    Update shortest path;
    Update pheromone trail;
  end
end
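A compact Ant System sketch in Python for a small symmetric travelling-salesman instance, in the spirit of [92], is shown below; the parameter values (alpha, beta, the evaporation rate and Q) and the random city coordinates are illustrative assumptions rather than settings prescribed by the survey.

import numpy as np

rng = np.random.default_rng(5)
n_cities, n_ants, n_iter = 10, 10, 200
alpha, beta, rho_evap, Q = 1.0, 2.0, 0.5, 1.0

pts = rng.random((n_cities, 2))                                # random city coordinates
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) + np.eye(n_cities)
eta = 1.0 / D                                                  # visibility (1 / distance)
tau = np.ones((n_cities, n_cities))                            # pheromone trail intensities

best_len, best_tour = np.inf, None
for t in range(n_iter):
    tours, lengths = [], []
    for k in range(n_ants):
        tour = [rng.integers(n_cities)]
        while len(tour) < n_cities:
            i = tour[-1]
            mask = np.ones(n_cities, bool); mask[tour] = False
            p = (tau[i] ** alpha) * (eta[i] ** beta) * mask     # transition probabilities
            tour.append(rng.choice(n_cities, p=p / p.sum()))
        L = sum(D[tour[s], tour[(s + 1) % n_cities]] for s in range(n_cities))
        tours.append(tour); lengths.append(L)
        if L < best_len:
            best_len, best_tour = L, tour
    # Pheromone update: evaporation plus deposits proportional to tour quality
    tau *= (1 - rho_evap)
    for tour, L in zip(tours, lengths):
        for s in range(n_cities):
            i, j = tour[s], tour[(s + 1) % n_cities]
            tau[i, j] += Q / L; tau[j, i] += Q / L

print("best tour length:", best_len)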
There are three processes that account for the successful nature
of this algorithm: positive feedback, distributed computation, and
a constructive greedy heuristic. Positive feedback is used as a
search and optimization tool. If a choice is made between different
‘path’ options and the result is good, then that choice will be more
favorable in the future. This results in the quick discovery of good
solutions. Distributed computation mimics the increased effective-
ness of a search carried out by a population of ants working
cooperatively together compared to the same number of ants
working individually. By incorporating this idea into the algorithm,
premature convergence is avoided. A greedy heuristic, i.e., only lo-
cally optimal moves are allowed, is used to ensure that reasonable
solutions are found early on in the search process.
Some structural engineering applications that ACO has been ap-
plied to include optimizing bridge deck rehabilitation [20], mini-
mum weight and compliance problems in structural topology
design [93] and design optimization of truss structures [72], con-
crete frames [75] and steel frames [39,30,31,49,94]. See also
[40,45,46,76,52,58].
4.2. Particle swarm optimization
Particle Swarm Optimization (PSO) algorithms mimic animal
flocking behaviors. These algorithms, originally accredited to Eberhart,
Kennedy and Shi [95,96], have a similar stochastic nature to
GAs and, like GAs, work with a set of potential solutions and the
concept of 'fitness'. Essentially, particles (candidate solutions)
move around the search space, iteratively improving their fitness
value according to a given quality measure. Each particle is influenced
by its neighbors. Simple mathematical formulas for position
x_id and velocity v_id are used to move each particle i through the
d-dimensional hyperspace, accelerating towards 'better' solutions
pbest_i. For a detailed description of PSO algorithms, see [95]. The
general structure of a PSO algorithm is as follows [97]:
procedure ParticleSwarm
begin
  Initialize x_id, v_id and pbest_i for each i;
  while (not termination condition) do
  begin
    for each i
      Evaluate f(x_i);
      Update pbest_i;
    end
    for each i
      Set g equal to index of neighbor with best pbest_i;
      Use g to calculate v_id;
      Update x_id = x_id + v_id;
      Evaluate f(x_i);
      if f(x_i) < pbest_i
        Update pbest_i;
    end
  end
end
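A short Python sketch of a global-best PSO with inertia weight, in the spirit of [95,96], follows; the coefficients (w, c1, c2), the clipping-based bound handling and the test objective are standard illustrative choices rather than values fixed by the survey.

import numpy as np

rng = np.random.default_rng(6)

def f(x):                                      # placeholder objective
    return np.sum((x - 0.2) ** 2)

l, u = np.zeros(6), np.ones(6)
n_part, n_iter = 20, 300
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive and social coefficients

X = rng.uniform(l, u, size=(n_part, l.size))   # positions x_id
V = np.zeros_like(X)                           # velocities v_id
pbest, pbest_f = X.copy(), np.array([f(x) for x in X])
g = pbest[np.argmin(pbest_f)].copy()           # global best position

for t in range(n_iter):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = np.clip(X + V, l, u)
    fX = np.array([f(x) for x in X])
    improved = fX < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], fX[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print("global best:", g, "f =", f(g))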
PSO algorithms have been applied to several structural engi-
neering problems, such as the optimization of a transport aircraft
wing [98], optimizing bridge deck rehabilitation [20], optimization
of pin connected structures [73], structural damage identification
[86,47], continuum structural topology design [99] and optimum
design of reinforced concrete frames [75], cellular beams [74], steel
structures [30] and truss-structures [43,54,72,46,52]. See also
[37,44,45,76,53,77,58].
4.3. Shuffled frog-leaping
The Shuffled Frog-Leaping (SFL) method is a local search heuris-
tic proposed by Eusuff, Lansey and Pasha in 2006 [100]. It belongs to
the recent group of evolutionary memetic algorithms. A memetic algorithm,
like other swarm algorithms, is a population based approach
influenced by natural memetics. As the name suggests, the SFL
discrete location and the frogs are trying to find the stone with
the largest food source. The frogs are allowed to communicate with
other frogs to improve their position.
Basically, the SFL algorithm allows for the separate evolution of
communities, and then shuffles these communities. The shuffling
process results in local search information being exchanged be-
tween communities. This exchange of information helps the algo-
rithm move towards a global optimum. In general, the global
exploration SFL algorithm has the following form [100]:
procedure ShuffledFrogLeaping
begin
  Initialize number of memeplexes m and number of frogs in each memeplex n;
  Sample F = m·n virtual frogs U(1), ..., U(F);
  Compute performance value f(i) for each frog U(i);
  Sort frogs in order of decreasing performance, store in array X;
  Set P_X equal to the best frog's position;
  while (not termination condition) do
  begin
    Partition frogs into memeplexes Y_1, ..., Y_m (n frogs in each) according to
      Y_k = [U(j)_k, f(j)_k | U(j)_k = U(k + m(j − 1)), f(j)_k = f(k + m(j − 1)), j = 1, ..., n];
    Memetic evolution within each memeplex (for details of the local exploration, see [100]);
    Replace Y_1, ..., Y_m into X in order of decreasing performance;
    Update P_X;
  end
end
SFL methods have been applied to the optimization of pipe sizes
for water distribution network design [42] and bridge deck repairs
[20,21].
4.4. Artificial bee colony
The Artificial Bee Colony (ABC) algorithm, proposed by Karaboga
in 2005 [101], follows the food foraging behavior of honey bee
swarms. There are three groups of bees in the model: the scout
bees, which fly randomly in the search space; the employed bees, which
select a random solution to be perturbed based on the exploitation
of the neighborhood of their food sources; and the onlooker bees,
which are placed on each food source according to a probability based
selection process [102]. The algorithm is based on the amount of
nectar at each of the n food sources, with onlookers having a preference
for food sources with high probability values. If a new source
has a higher nectar amount than a source in their memory, then
the new position is updated and the previous position is forgotten.
If a predetermined number of trials controlled by the parameter
limit shows no improvement to a solution, then the food source
is abandoned, and the corresponding employed bee becomes a
scout bee. The general structure of the ABC algorithm is as follows
[102]:
procedure ArtificialBeeColony
begin
  Initialize n, limit, and food positions x_i for i = 1, ..., n, each with dimension d;
  Evaluate the fitness of each food position;
  while (not termination condition) do
  begin
    Employed phase:
      Produce new solutions with k ∈ {1, ..., n}, j ∈ {1, ..., d}, φ ∈ [0, 1] at random according to
        v_ij = x_ij + φ_ij (x_ij − x_kj);
      Evaluate solutions;
      Apply greedy selection for employed bees;
    Onlooker phase:
      Calculate probability values for each solution x_i according to
        P_i = f_i / Σ_{j=1}^{n} f_j;
      Produce new solutions from x_i selected using P_i;
      Evaluate these solutions;
      Apply greedy selection for onlooker bees;
    Scout phase:
      Find abandoned solution:
      if limit is exceeded
        Replace with a new random solution;
      end
    Update best solution;
  end
end
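A compact Python sketch of the employed/onlooker/scout cycle described above is given below; the fitness transformation 1/(1+f), the perturbation factor drawn from [-1, 1] and the parameter values are common choices from the ABC literature, supplied here only for illustration.

import numpy as np

rng = np.random.default_rng(7)

def f(x):                                      # placeholder objective
    return np.sum((x - 0.4) ** 2)

l, u = np.zeros(4), np.ones(4)
n_food, limit, n_iter = 10, 20, 500

X = rng.uniform(l, u, size=(n_food, l.size))
fX = np.array([f(x) for x in X])
trials = np.zeros(n_food, int)

def neighbour(i):
    """Perturb one coordinate of source i towards/away from a random other source k."""
    k = rng.choice([s for s in range(n_food) if s != i])
    j = rng.integers(l.size)
    v = X[i].copy()
    v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
    return np.clip(v, l, u)

def greedy(i, v):
    """Greedy selection: keep v if it improves source i, otherwise count a failed trial."""
    fv = f(v)
    if fv < fX[i]:
        X[i], fX[i], trials[i] = v, fv, 0
    else:
        trials[i] += 1

for t in range(n_iter):
    for i in range(n_food):                    # employed phase
        greedy(i, neighbour(i))
    fitness = 1.0 / (1.0 + fX)                 # probability values P_i
    P = fitness / fitness.sum()
    for _ in range(n_food):                    # onlooker phase
        i = rng.choice(n_food, p=P)
        greedy(i, neighbour(i))
    worst = np.argmax(trials)                  # scout phase
    if trials[worst] > limit:
        X[worst] = rng.uniform(l, u)
        fX[worst], trials[worst] = f(X[worst]), 0

best = np.argmin(fX)
print("best food source:", X[best], "f =", fX[best])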
For an introduction and references to the different bee optimi-
zation methods, see the introduction of [77] (Artificial Bee Colony
Algorithm). The ABC algorithm described above has been applied
to structural optimization problems involving truss structures
[44,77,58], laminated composite components [53], inverse analysis
of dam-foundation systems [48] and welded beam and coil spring
design [55].
4.5. Strengths and limitations of swarm algorithms
Like other heuristic methods, there is no mathematical convergence
theory for swarm algorithms. Swarm algorithms are
often designed with very specific problems in mind, and as a
result may be ineffective on problems with different structures.
However, when applied to the specific problem they are designed
for, swarm algorithms have been found to be highly
effective.
5. Direct search methods
The research area of Derivative-free Optimization has blos-
somed in recent years. As previously stated, these methods do
not require derivative information and have mathematical conver-
gence theory. The following Derivative-free Optimization algo-
rithms will be discussed in the subsequent sub-sections:
directional direct search, simplicial direct search, simplex gradient
methods and trust region methods.
5.1. Directional direct search
In Directional Direct Search (DDS) methods, a set of directions
with suitable features is used to generate a finite set of points
at which the objective function is evaluated. An example of such
a set of directions is a positive basis. A finite set, or an infinite
set, of positive bases may be used during the algorithm. Another
example is an integer lattice, which is constructed from a positive
basis. A well known class of mesh based directional direct search
methods is Mesh Adaptive Direct Search (MADS), proposed by Audet
and Dennis in 2006 [103]. The general structure of a DDS method
is as follows [104]:
procedure DirectSearch
begin
  Initialize x_0 and a set of directions D;
  while (not termination condition) do
  begin
    Search for a point with f(x) < f(x_k) (optional);
    Poll points from {x_k + α_k d : d ∈ D_k (⊆ D)};
    if f(x_k + α_k d_k) < f(x_k)
      Stop polling;
      x_{k+1} ← x_k + α_k d_k;
    else
      x_{k+1} ← x_k;
    Update mesh parameter α_k;
  end
end
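A bare-bones directional direct search (compass search polling the 2n coordinate directions) is sketched in Python below; this is a simplified illustration of the poll step only, not an implementation of MADS [103], and the objective and step-size rules are our own illustrative choices.

import numpy as np

def f(x):                                      # placeholder objective
    return (x[0] - 1.0) ** 2 + 3 * (x[1] + 0.5) ** 2

x = np.zeros(2)                                # x_0
alpha, alpha_min = 1.0, 1e-6                   # step-size (mesh) parameter and stopping tolerance
D = np.vstack([np.eye(2), -np.eye(2)])         # positive basis: +/- coordinate directions

while alpha > alpha_min:
    # Poll step: evaluate f at x + alpha*d for each direction d in D
    improved = False
    for d in D:
        y = x + alpha * d
        if f(y) < f(x):
            x, improved = y, True              # successful poll: accept the point and stop polling
            break
    # Update the step-size parameter: expand on success, contract on failure
    alpha = 2 * alpha if improved else alpha / 2

print("solution:", x, "f =", f(x))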
DDS methods have been applied to structural engineering problems
such as optimizing braced steel frameworks [105], structural
damage detection [106], and design optimization of reinforced
concrete flat slab buildings [33] and viscous dampers [35]. In
[82], a set of current configurations is used within a simulated
annealing framework to create a direct search simulated annealing
(DSA) method for design optimization of laminated composite
structures.
5.2. Simplicial direct search
In Simplicial Direct Search (SDS) methods, the algorithm evaluates
the function at a set of points that form a simplex and uses
those function values to decide the next move. A simplex in $\mathbb{R}^n$ is
the convex hull of a set of n+1 affinely independent points. By
evaluating the function at a set of points that forms a simplex,
the algorithm collects sufficient information from around the current
iterate. (A shifted set of n+1 affinely independent points
forms a set of linearly independent points, i.e., the shifted set spans
$\mathbb{R}^n$.) The most well known simplex based simplicial direct search
method is the Nelder-Mead method [107] (also known as the
NM, the amoeba or the adaptive simplex method). We note that
the original Nelder-Mead method proposed by Nelder and Mead
in 1965 [107] does not have convergence theory, but many variants
of the method do.
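Robust Nelder-Mead implementations are widely available, so in practice one would typically call an existing routine rather than re-code the simplex operations; a minimal example using SciPy's implementation (our illustration, with a placeholder objective, not an example from the survey) is shown below.

import numpy as np
from scipy.optimize import minimize

def f(x):                                   # placeholder objective (e.g. a penalized weight)
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2 + 0.1 * np.sin(5 * x[0])

x0 = np.array([0.0, 0.0])                   # initial point; SciPy builds the initial simplex around it
res = minimize(f, x0, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 2000})
print(res.x, res.fun)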
SDS methods have been used in several structural engineering
applications, including structural damage identification [86], truss
design optimization [27] and estimation of a crack location and
depth in a cantilever beam [37]. See also [36].
5.3. Simplex gradient methods
A simplex gradient method (SGM) uses a simplex gradient instead
of the true gradient to generate search directions that point
towards nearby (local) minimizers. A simplex gradient is the gradient
of the linear interpolation over a set of n+1 points in $\mathbb{R}^n$. Unlike
SDS methods, which use simplices to provide a set of directions on
which to evaluate the function, SGMs calculate simplex gradients to find
descent directions. The general structure of an SGM is as
follows [104]:
procedure SimplexGradient
begin
  Initialize x_0, simplex Y_0, search radius Δ_0 and simplex accuracy measure μ_k;
  Initialize line search Armijo-like parameter η;
  while (not termination condition) do
  begin
    Compute a simplex gradient ∇_S f(x_k) such that Δ_k ≤ μ_k ‖∇_S f(x_k)‖;
    Line search: find t_k > 0 such that f(x_k − t_k ∇_S f(Y_k)) < f(x_k) − η t_k ‖∇_S f(Y_k)‖^2;
    if no such t_k > 0 is found
      Decrease μ_k;
    else
      Let x_{k+1} = argmin{f(y) : y ∈ S_k}, where S_k contains all
      f-evaluations from this iteration;
  end
end
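An illustrative Python sketch of one way to build a simplex gradient and use it in a backtracking line search is given below; the fixed coordinate simplex of radius delta and the Armijo parameter eta are our simplifications of the general framework in [104], and the objective is a placeholder.

import numpy as np

def f(x):                                         # placeholder smooth objective
    return (x[0] - 1.0) ** 2 + 2 * (x[1] + 2.0) ** 2

def simplex_gradient(x, delta):
    """Gradient of the linear interpolant over the simplex {x, x + delta*e_1, ..., x + delta*e_n}."""
    n = x.size
    S = delta * np.eye(n)                         # directions from x to the other vertices
    df = np.array([f(x + S[i]) - f(x) for i in range(n)])
    return np.linalg.solve(S.T, df)               # solve S^T g = df (here simply df / delta)

x, delta, eta = np.array([5.0, 5.0]), 0.5, 1e-4
for k in range(200):
    g = simplex_gradient(x, delta)
    if np.linalg.norm(g) < 1e-6:
        break
    # Armijo-like backtracking line search along the negative simplex gradient
    t, fx = 1.0, f(x)
    while t > 1e-12 and f(x - t * g) >= fx - eta * t * np.dot(g, g):
        t *= 0.5
    if t > 1e-12:
        x = x - t * g
    else:
        delta *= 0.5                              # no acceptable step: improve the gradient accuracy

print("solution:", x, "f =", f(x))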
An application of an SGM in structural engineering is seen in
[55] for welded beam and coil spring design.
The Robust Approximate Gradient Sampling (RAGS) algorithm is
a novel derivative-free optimization algorithm for finite minimax
problems, proposed by Hare and Nutini in 2012 [108]. The RAGS
method is an improvement on SGMs for structured functions. By
exploiting the substructure of the finite max function, the RAGS algorithm
is able to minimize along non-differentiable ridges of non-smooth
functions and converge to minima of the objective function.
The general structure of the RAGS algorithm is as follows [108]:
procedure RAGS
begin
  Initialize x_0, search radius Δ_0, Armijo-like parameter η and other parameters;
  begin
    Generate a set of n+1 points;
    Use the points to generate a robust approximate subdifferential G_Y^k;
    Set search direction d_Y^k = −Proj(0 | G_Y^k);
    if Δ_k small, but |d^k| large
      Carry out line search: find t_k > 0 such that f(x_k + t_k d_k) < f(x_k) − η t_k |d_k|^2;
      Success: update x_k and loop;
      Failure: decrease accuracy measure and loop;
    else if Δ_k large
      Decrease Δ_k and loop;
    else
      Terminate;
  end
end
In [38], the RAGS algorithm is shown to be a quickly converging
and efficient method for solving the problem of minimizing the
maximum inter-story drift between two buildings.
5.4. Trust-region methods
Trust-region (TR) methods locally minimize quadratic models
of the objective function over regions where the quadratic model
is ‘‘trusted’’ to be accurate. TR methods are Derivative-free Optimi-
zation methods, with an abundant number of publications illus-
trating the supporting convergence analysis and theory. An
example of a TR method used in a practical application can be
found in [109], where the design of a vehicle door is optimized.
To the authors’ knowledge, TR methods have not been applied to
any applications in structural engineering to date.
5.5. Strengths and limitations of derivative-free optimization
Derivative-free Optimization’s strongest aspect is its mathe-
matical convergence theory that guarantees the quality of the final
solution. This makes Derivative-free Optimization very well suited
for applying as a final step to ensure local optimality. However,
Derivative-free Optimization methods typically scale poorly with
dimension, and so may require very large numbers of function calls
in problems where the number of variables is very large.
6. Discussion and conclusion
In the previous sections, we provided multiple references that
use non-gradient methods in structural engineering applications.
We provide a summary of the methods in Table 2. In Table 1 below,
we summarize a few of the papers that compared the performance
of several non-gradient methods on one application. While many
other papers compared various non-gradient methods, we limit
ourselves to those that do the comparison using a structural engi-
neering problem and use at least two algorithms discussed in this
article.
Notes for the 'Other' column of Table 1 (✓ marks the winning algorithm):
1. Other memetic methods.
2. Directional direct search (MADS) and ✓RAGS.
3. ✓Evolutionary strategies.
4. ✓Adaptive harmony search.
5. Branch-and-bound metaheuristic.
6. ✓Bee colony optimization and simplex method.
It is obvious that non-gradient methods are well used in structural
engineering applications. As seen in Table 1, there are three
papers that compare non-gradient methods with gradient based
methods [43,54,55]. In [43], the presented PSO method is shown
to generate comparable results to several gradient based methods.
In [55], the presented bee colony algorithm is also shown to generate
comparable results to a gradient based method. In [54], the gradient
based methods win. As stated before, non-gradient methods
are useful when gradient information is unavailable, unreliable or
expensive in terms of computation time. However, when compared
against a gradient based method on a function with gradient
information, a non-gradient method will almost always come up
short.
We also observe from Table 1, as well as Section 2, that evolu-
tionary algorithms are the most commonly used non-gradient
methods in structural optimization. Both discrete and continuous
problems can be handled by evolutionary algorithms, as well as
constrained or unconstrained problems. Indeed, GAs are very ver-
satile with respect to the types of problems they can be applied
to. (For a complete summary table of the methods presented in this
paper and the types of problems they can be applied to, see Table 2
at the end of this section.)
However, this does not necessarily imply that evolutionary
algorithms are the most appropriate method for black-box prob-
lems in structural engineering. In fact, we see multiple papers
using evolutionary algorithms as benchmark methods to compare
other methods against (see the end of Section 2.1). In all of these
papers, evolutionary algorithms are shown to perform comparably
or worse with respect to efficiency and solution quality. This obser-
vation does spark the suggestion that evolutionary algorithms may
be overused, specifically for continuous problems. As evolutionary
algorithms were originally designed for discrete problems, it is not
surprising to see an evolutionary algorithm come in second place
to an algorithm designed for continuous problems.
This being said, it is worth noting that it would be inaccurate to
conclude that evolutionary algorithms are ‘bad’. Most of the papers
in question focus on newly designed algorithms by the papers’
authors. As such, evolutionary algorithms used may not have been
optimally adjusted to the problem in question. It is safe to say that
it may be beneficial to use a method designed to deal with the spe-
cific structure of the problem under consideration.
We see methods designed for specific problems in the area of
Derivative-free Optimization. Since these algorithms have support-
ing convergence theory, specific assumptions are usually made
about the objective function. Supporting convergence theory al-
lows us to escape the uncertainty of a heuristic method; we know
that when a Derivative-free Optimization algorithm terminates, it
has found a locally optimal (or in the case of a convex function,
globally optimal) solution. Furthermore, Derivative-free Optimiza-
Table 2
Summary of methods: section number, algorithm, abbreviation, description, whether the algorithm is used for constrained or unconstrained and discrete or continuous optimization problems, and any additional assumptions on the problem.

Section | Algorithm | Abbr. | Description | Constrained/unconstrained | Discrete/continuous | Other assumptions on problem
2.1 | Genetic algorithm | GA | Evolutionary gene-based heuristic | Constrained and unconstrained | Discrete and continuous | –
3.1 | Harmony search | HS | Music inspired heuristic | Constrained and unconstrained | Discrete and continuous | –
3.2 | Simulated annealing | SA | Materials science based heuristic | Constrained and unconstrained | Discrete and continuous | Large search space
3.3 | Ray optimization | RO | Light ray based heuristic | Constrained | Continuous | –
3.4 | Tabu search | TS | Local search heuristic | Constrained | Discrete and continuous | Combinatorial
4.1 | Ant colony optimization | ACO | Stochastic pheromone mimicking heuristic | Unconstrained and constrained | Discrete | Stochastic combinatorial
4.2 | Particle swarm opt. | PSO | Flocking behavior based heuristic | Unconstrained and constrained | Continuous and discrete | –
4.3 | Shuffled frog-leaping | SFL | Evolutionary meme-based heuristic | Constrained and unconstrained | Discrete | Combinatorial
4.4 | Bee colony optimization | BCO | Bee foraging based heuristic | Unconstrained and constrained | Continuous and discrete | Functional, combinatorial
5.1 | Directional direct search | DDS | DFO mesh/lattice based algorithm | Unconstrained and constrained | Continuous | Non-linear
5.2 | Simplicial direct search | SDS | DFO simplex based algorithm | Unconstrained and constrained | Continuous | –
5.3 | Robust approx. grad. sampling | RAGS | DFO substructure exploiting algorithm | Unconstrained | Continuous | Finite minimax problem
Table 1
Comparison of methods: reference, application description and algorithms compared (• indicates that the corresponding algorithm was compared; ✓ indicates the 'winning' algorithm(s); numbered superscripts refer to the notes above).

Refs. | Application | Grad. | GA | HS | TS | SA | ACO | PSO | SFL | Other
[20] | Bridge deck rehabilitation | ✓ 1
[38] | Seismic dampers | 2
[40] | Steel frames | ✓
[30] | Steel frames | 3
[45] | Steel frames | 4
[83] | Steel frames | ✓ ✓
[43] | Truss-structures | ✓ ✓
[26] | Truss-structures | ✓ 5
[54] | Truss-structures | ✓
[42] | Water distribution network design | ✓
[55] | Welded beam/coil design | ✓ 6
Furthermore, Derivative-free Optimization methods can easily incorporate a heuristic
on top of their regular structure to decrease convergence time and increase solution
quality.
As seen in Table 1, heuristics such as SA and TS are commonly
used in structural engineering. Similar to evolutionary algorithms,
a heuristic has very few limitations as to what type of problem it
can be applied to. As stated above, heuristics are often used in con-
junction with other algorithms, which is just another way that
algorithms can be easily tailored to the problem at hand.
In conclusion, non-gradient methods are widely used in struc-
tural engineering applications. Most dominantly, we see heuristics
being applied to various problems. The strengths of these methods
include their flexibility and versatility to be applied to multiple dif-
ferent problem types. For difficult, restrictive problems, these
methods are easy to implement and can provide reasonable solu-
tions. However, by tailoring an optimization method to or using
a method that is tailored to the problem at hand, a significant in-
crease in solution quality and efficiency of the algorithm can be
observed.
References
[1] Hasançebi O, Erbatur F. Layout optimisation of trusses using simulated
annealing. Adv Eng Softw 2002;33(7–10):681–96.
[2] Tang W, Tong L, Gu Y. Improved genetic algorithm for design optimization of
truss structures with sizing, shape and topology variables. Int J Numer
Methods Eng 2005;62(13):1737–62.
[3] Adeli H, Kamal O. Efficient optimization of plane trusses. Adv Eng Softw
Workst 1991;13(3):116–22.
[4] Miguel LFF, Lopez RH, Miguel LFF. Multimodal size, shape, and topology
optimisation of truss structures using the firefly algorithm. Adv Eng Softw
2013;56:23–37.
[5] Jenkins WM. Towards structural optimization via the genetic algorithm.
Comput Struct 1991;40(5):1321–7.
[6] Lagaros ND, Papadrakakis M, Kokossalakis G. Structural optimization using
evolutionary algorithms. Comput Struct 2002;80(7–8):571–89.
[7] Saka MP. Optimum design of steel frames using stochastic search techniques
based on natural phenomena: a review. Civil engineering computations: tools
and techniques. Stirlingshire, UK: Saxe-Coburg Publications; 2007. p. 105–47
[chapter 6].
[8] Sonmez FO. Structural optimization using simulated annealing. InTech; 2008.
p. 281–306 [chapter 14].
[9] Poli R. Analysis of the publications on the applications of particle swarm
optimisation. J Artif Evol Appl 2008:10.
[10] Geem ZW. Harmony search algorithms for structural design optimization. 1st
ed. Springer Publishing Company, Incorporated; 2009.
[11] Lamberti L, Pappalettere C. Metaheuristic design optimization of skeletal
structures: a review. Comput Technol Rev 2011;4:1–32.
[12] Saka MP, Dogan E. Recent developments in metaheuristic algorithms: a
review. Comput Technol Rev 2012;5:31–78.
[13] Kaveh A, Motie S, Mohammad A, Moslehi M. Magnetic charged system
search: a new meta-heuristic algorithm for optimization. Acta Mech
2013;224:85–107.
[14] Kaveh A, Talatahari S. A novel heuristic optimization method: charged system
search. Acta Mech 2010;213:267–89.
[15] Holland JH. Adaptation in natural and artificial systems. Ann Arbor, MI,
USA: University of Michigan Press; 1975.
[16] Gen M, Cheng R. Genetic algorithms and engineering optimization
(engineering design and automation). Wiley-Interscience; 1999.
[17] Deng L, Ghosn M, Shao S. Development of a shredding genetic algorithm for
structural reliability. Struct Safety 2005;27(2):113–31.
[18] Wang J, Ghosn M. Linkage-shredding genetic algorithm for reliability
assessment of structural systems. Struct Safety 2005;27(1):49–72.
[19] Furuta H, Maeda K, Watanabe E. Application of genetic algorithm to aesthetic
design of bridge structures. Comput-Aid Civil Infrastruct Eng 1995;10(6):
415–21.
[20] Elbeltagi E, Elbehairy H, Hegazy T, Grierson D. Evolutionary algorithms for
optimizing bridge deck rehabilitation. In: Soibelman Lucio, Pena-Mora
Feniosky, editors. Proceedings of the 2005 ASCE international conference on
computing in civil engineering, vol. 179. ASCE; 2005. 12 pp.
[21] Elbehairy H, Elbeltagi E, Hegazy T, Soudki K. Comparison of two evolutionary
algorithms for optimization of bridge deck repairs. Comput-Aid Civil
Infrastruct Eng 2006;21(8):561–72.
[22] Fu K, Zhai Y, Zhou S. Optimum design of welded steel plate girder bridges
using a genetic algorithm with elitism. J Bridge Eng 2005;10(3):291–301.
[23] García-Pérez J, Castellanos F, Díaz O. Optimum seismic zoning for multiple
types of structures. Earthq Eng Struct Dynam 2003;32(5):711–30.
[24] Li J, Liu W, Bao Y. Genetic algorithm for seismic topology optimization of
lifeline network systems. Earthq Eng Struct Dynam 2008;37(11):1295–312.
[25] Dede T, Bekiroğlu S, Ayvaz Y. Weight minimization of trusses with genetic
algorithm. Appl Soft Comput 2011;11(2):2565–75.
[26] Manoharan S, Shanmuganathan S. A comparison of search mechanisms for
structural optimization. Comput Struct 1999;73(15):363–72.
[27] Rahami H, Kaveh A, Aslani M, Najian Asl R. A hybrid modified genetic-Nelder
Mead simplex algorithm for large-scale truss optimization. Iran Univ Sci
Technol 2011;1(1):29–46.
[28] Balling R, Briggs R, Gillman K. Multiple optimum size/shape/topology designs
for skeletal structures using a genetic algorithm. J Struct Eng 2006;132(7):
1158–65.
[29] Burns SA, editor. State of the art on the use of genetic algorithms in design of
steel structures. ASCE; 2002. p. 55–77 [chapter 3].
[30] Hasançebi O, Çarbaş S, Doğan E, Erdal F, Saka MP. Comparison of non-deterministic
search techniques in the optimum design of real size steel
frames. Comput Struct 2010;88(17–18):1033–48.
[31] Kaveh A, Talatahari S. An improved ant colony optimization for the design of
planar steel frames. Eng Struct 2010;32(3):864–73.
[32] Park H, Kwon Y, Seo J, Woo B. Distributed hybrid genetic algorithms for
structural optimization on a pc cluster. J Struct Eng 2006;132(12):1890–7.
[33] Sahab MG, Ashour AF, Toropov VV. A hybrid genetic algorithm for reinforced
concrete flat slab buildings. Comput Struct 2005;83(8–9):551–9.
[34] Khedr MAH. Optimum design of steel telecommunication poles using genetic
algorithms. Can J Civil Eng 2007;34(12):1567–76.
[35] Bigdeli K, Hare W, Tesfamariam S. Optimal design of viscous damper
connectors for adjacent structures using genetic algorithm and Nelder-
Mead algorithm. In: Proceedings of SPIE conference on smart structures and
materials. SPIE; 2012.
[36] Akbulut M, Sonmez FO. Design optimization of laminated composites using a
new variant of simulated annealing. Comput Struct 2011;89(17–18):
1712–24.
[37] Vakil Baghmisheh MT, Peimani M, Sadeghi MH, Ettefagh MM, Tabrizi AF. A
hybrid particle swarm–Nelder–Mead optimization method for crack detection
in cantilever beams. Appl Soft Comput 2012;12(8):2217–26.
[38] Bigdeli K, Hare W, Nutini J, Tesfamariam S. Optimal design of damper
connectors for adjacent buildings. Comput Struct, submitted for publication.
20 pp.
[39] Camp CV, Bichon BJ, Stovall SP. Design of steel frames using ant colony
optimization. J Struct Eng 2005;131(3):369–79.
[40] Degertekin SO. Optimum design of steel frames using harmony search
algorithm. Struct Multidiscip Optimiz 2007;36(4):393–401.
[41] Degertekin SO, Hayalioglu MS. Harmony search algorithm for minimum cost
design of steel frames with semi-rigid connections and column bases. Struct
Multidiscip Optimiz 2010;42(5):755–68.
[42] Eusuff MM, Lansey KE. Optimization of water distribution network design
using the shuffled frog leaping algorithm. J Water Resour Plan Manage
2003;129(3):210–25.
[43] Fourie PC, Groenwold AA. The particle swarm optimization algorithm in
size and shape optimization. Struct Multidiscip Optimiz 2002;23(4):
259–67.
[44] Hadidi A, Kazemzadeh Azad S, Kazemzadeh Azad S. Structural optimization
using artificial bee colony algorithm. In: 2nd International conference on
engineering optimization; September 2010.
[45] Hasançebi O, Erdal F, Saka M. Adaptive harmony search method for structural
optimization. J Struct Eng 2010;136(4):419–31.
[46] Jansen PW, Perez RE. Constrained structural design optimization via a parallel
augmented lagrangian particle swarm optimization approach. Comput Struct
2011;89(13–14):1352–6.
[47] Kang F, Li J, Xu Q. Damage detection based on improved particle swarm
optimization using vibration data. Appl Soft Comput 2012;12(8):2329–35.
[48] Kang F, Li J, Xu Q. Structural inverse analysis by hybrid simplex artificial bee
colony algorithms. Comput Struct 2009;87(13–14):861–70.
[49] Kaveh A, Farahmand Azar B, Hadidi A, Rezazadeh Sorochi F, Talatahari S.
Performance-based seismic design of steel frames using ant colony
optimization. J Construct Steel Res 2010;66(4):566–74.
[50] Lee KS, Geem ZW. A new structural optimization method based on the
harmony search algorithm. Comput Struct 2004;82(9–10):781–98.
[51] Lee KS, Geem ZW. A new meta-heuristic algorithm for continuous
engineering optimization: harmony search theory and practice. Comput
Methods Appl Mech Eng 2005;194(36–38):3902–33.
[52] Luh GC, Lin CY. Optimal design of truss-structures using particle swarm
optimization. Comput Struct 2011;89(23–24):2221–32.
[53] Omkar SN, Senthilnath J, Khandelwal R, Narayana Naik G, Gopalakrishnan S.
Artificial bee colony (ABC) for multi-objective design optimization of
composite structures. Appl Soft Comput 2011;11(1):489–99.
[54] Perez RE, Behdinan K. Particle swarm approach for structural design
optimization. Comput Struct 2007;85(19–20):1579–88.
[55] Pham DT, Ghanbarzadeh A, Otri S, Koç E. Optimal design of mechanical
components using the bees algorithm. Proc Inst Mech Eng, Part C: J Mech Eng
Sci 2009;223(5):1051–6.
[56] Rao ARM, Shyju PP. A meta-heuristic algorithm for multi-objective optimal
design of hybrid laminate composite structures. Comput-Aid Civil Infrastruct
Eng 2010;25(3):149–70.
[57] Saka MP. Optimum design of steel sway frames to bs5950 using harmony
search algorithm. J Construct Steel Res 2009;65(1):36–43.
[58] Sonmez M. Discrete optimum design of truss structures using artificial bee
colony algorithm. Struct Multidiscip Optimiz 2011;43(1):85–97.
[59] Rechenberg I. Cybernetic solution path of an experimental problem. Library
Trans 1965;1122.
[60] Rechenberg I. Evolutionsstrategie: optimierung technischer systeme nach
prinzipien der biologischen evolution. Ph.D. thesis; 1971.
[61] Schwefel H-P. Kybernetische evolution als strategie der exprimentellen
forschung in der strömungstechnik. M.Sc. thesis; 1965.
[62] Schwefel H-P. Evolutionsstrategie und numerische optimierung. Dissertation;
1975.
[63] Beyer H-G, Schwefel H-P. Evolution strategies: a comprehensive introduction.
Nat Comput: Int J 2002;1(1):3–52.
[64] Thierauf G, Cai J. Parallel evolution strategy for solving structural
optimization. Eng Struct 1997;19(4):318–24.
[65] Hasançebi O. Optimization of truss bridges within a specified design domain
using evolution strategies. Eng Optimiz 2007;39(6):737–56.
[66] Hasançebi O. Adaptive evolution strategies in structural optimization:
enhancing their computational performance with applications to large-
scale structures. Comput Struct 2008;86(1–2):119–32.
[67] Papadrakakis M, Lagaros ND, Tsompanakis Y. Structural optimization using
evolution strategies and neural networks. Comput Methods Appl Mech Eng
1998;156(1–4):309–33.
[68] Chen TY, Chen HC. Mixed-discrete structural optimization using a rank-niche
evolution strategy. Eng Optimiz 2009;41(1):39–58.
[69] Muc A, Muc-Wierzgoń M. An evolution strategy in structural optimization
problems for plates and shells. Compos Struct 2012;94(4):1461–70.
[70] Geem ZW, Kim J, Loganathan GV. A new heuristic optimization algorithm:
harmony search. Trans Soc Model Simul Int 2001;76(2):60–8.
[71] Geem Zong Woo. Music-inspired harmony search algorithm: theory and
applications. Studies in computational intelligence, vol. 191. Springer; 2009.
206 pp.
[72] Kaveh A, Talatahari S. Particle swarm optimizer, ant colony strategy and
harmony search scheme hybridized for optimization of truss structures.
Comput Struct 2009;87(5–6):267–83.
[73] Li LJ, Huang ZB, Liu F, Wu QH. A heuristic particle swarm optimizer for
optimization of pin connected structures. Comput Struct 2007;85(7–8):340–9.
[74] Erdal F, Doğan E, Saka MP. Optimum design of cellular beams using harmony
search and particle swarm optimizers. J Construct Steel Res 2011;67(2):
237–47.
[75] Kaveh A, Sabzi O. A comparative study of two meta-heuristic algorithms for
optimum design of reinforced concrete frames. Int J Civil Eng 2011;9(3):193–206.
[76] Kaveh A, Talatahari S. Charged system search for optimal design of frame
structures. Appl Soft Comput 2012;12(1):382–93.
[77] Sonmez M. Artificial bee colony algorithm for optimization of truss
structures. Appl Soft Comput 2011;11(2):2406–18.
[78] Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing.
Science 1983;220(4598):671–80.
[79] Černý V. Thermodynamical approach to the traveling salesman problem: an efficient simulation algorithm. J Optimiz Theory Appl 1985;45(1):41–51.
[80] Eglese RW. Simulated annealing: a tool for operational research. Eur J Oper
Res 1990;46(3):271–81.
[81] Xu X, Luo Y. Force finding of tensegrity systems using simulated annealing
algorithm. J Struct Eng 2010;136(8):1027–31.
[82] Akbulut M, Sonmez FO. Optimum design of composite laminates for
minimum thickness. Comput Struct 2008;86(21–22):1974–82.
[83] Ohsaki M, Kinoshita T, Pan P. Multiobjective heuristic approaches to seismic
design of steel frames with standard sections. Earthq Eng Struct Dynam
2007;36(11):1481–95.
[84] Serra M. Optimum design of thin-walled closed cross-sections: a numerical
approach. Comput Struct 2005;83(4–5):297–302.
[85] Paya I, Yepes V, González-Vidosa F, Hospitaler A. Multiobjective optimization
of concrete frames by simulated annealing. Comput-Aid Civil Infrastruct Eng
2008;23(8):596–610.
[86] Begambre O, Laier JE. A hybrid particle swarm optimization simplex
algorithm (PSOS) for structural damage identification. Adv Eng Softw
2009;40(9):883–91.
[87] Kaveh A, Khayatazad M. A new meta-heuristic method: ray optimization.
Comput Struct 2012;112–113:283–94.
[88] Kaveh A, Khayatazad M. Ray optimization for size and shape optimization of
truss structures. Comput Struct 2013;117:82–94.
[89] Glover F. Tabu search – Part I. ORSA J Comput 1989;1(2):190–206.
[90] Kargahi M, Anderson JC, Dessouky MM. Structural weight optimization of
frames using tabu search. I: Optimization procedure. J Struct Eng
2006;132(12):1858–68.
[91] Kargahi M, Anderson JC. Structural weight optimization of frames using tabu
search. II: Evaluation and seismic performance. J Struct Eng 2006;132(12):
1869–79.
[92] Dorigo M, Maniezzo V, Colorni A. The ant system: optimization by a colony of
cooperating agents. IEEE Trans Syst Man Cybernet – Part B 1996;26(1):29–41.
[93] Luh GC, Lin CY. Structural topology optimization using ant colony
optimization algorithm. Appl Soft Comput 2009;9(4):1343–53.
[94] Aydoğdu İ, Saka MP. Ant colony optimization of irregular steel frames including elemental warping effect. Adv Eng Softw 2012;44(1):150–69. CIVIL-COMP.
[95] Kennedy J, Eberhart R. Particle swarm optimization. Proceedings of IEEE
international conference on neural networks, 1995, vol. 4. IEEE; 1995. p.
1942–8.
[96] Shi Y, Eberhart R. A modified particle swarm optimizer. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, IEEE world congress on computational intelligence; 1998. p. 69–73.
[97] Kennedy J. Swarm intelligence. In: Handbook of nature-inspired and
innovative computing. Springer; 2006. p. 187–219.
[98] Venter G, Sobieszczanski-Sobieski J. Multidisciplinary optimization of a
transport aircraft wing using particle swarm optimization. Struct
Multidiscip Optimiz 2004;26(1–2):121–31.
[99] Luh GC, Lin CY, Lin YS. A binary particle swarm optimization for
continuum structural topology optimization. Appl Soft Comput 2011;
11(2):2833–44.
[100] Eusuff M, Lansey K, Pasha F. Shuffled frog-leaping algorithm: a memetic
meta-heuristic for discrete optimization. Eng Optimiz 2006;38(2):
129–54.
[101] Karaboga D. An idea based on honey bee swarm for numerical optimization.
Technical report TR06, Erciyes University; October 2005.
[102] Parpinelli RS, Benitez CMV, Lopes HS. Parallel approaches for the artificial bee
colony algorithm. Springer; 2011. p. 329–345.
[103] Audet C, Dennis Jr JE. Mesh adaptive direct search algorithms for constrained
optimization. SIAM J Optimiz 2006;17(1):188–217.
[104] Conn A, Scheinberg K, Vicente L. Introduction to derivative-free optimization.
MPS/SIAM series on optimization, vol. 8. SIAM; 2009.
[105] Baldock R, Shea K, Eley D. Evolving optimized braced steel frameworks for tall
buildings using modified pattern search. In: Soibelman Lucio, Pena-Mora
Feniosky, editors. Proceedings of the 2005 ASCE international conference on
computing in civil engineering, vol. 179. ASCE; 2005. 12 pp.
[106] Kourehli SS, Ghodrati Amiri G, Ghafory-Ashtiany M, Bagheri A. Structural
damage detection based on incomplete modal data using pattern search
algorithm. J Vib Contr 2012.
[107] Nelder JA, Mead R. A simplex method for function minimization. Comput J
1965;7(4):308–13.
[108] Hare W, Nutini J. A derivative-free approximate gradient sampling algorithm for finite minimax problems. Comput Optimiz Appl 2013; accepted for publication. 33 pp.
[109] Chen G, Han X, Liu G, Jiang C, Zhao Z. An efficient multi-objective
optimization method for black-box functions using sequential approximate
technique. Appl Soft Comput 2012;12(1):14–27.
Water distribution networks are a significant investment. As such, a large volume of research has examined the pipe design/rehabilitation problem and is summarized in other papers. This paper focuses on the application of a new optimization method to the pipe sizing problem. In recent years, the researchers have attempted to exploit expanding computer power and combined new optimization techniques with hydraulic simulation software. The computer model in this work, SFLANET, is based upon the shuffled frog leaping algorithm (SFLA), a memetic algorithm (a kind of meta-heuristic). The optimization algorithm is linked to EPANET via the EPANET Toolkit and can be used to design large, complex pipe network systems. Here results are shown for the New York City Tunnel problem.