Chapter 1
Introduction to Optimization
1.1 What Is Optimization?
For almost all human activities there is a desire to deliver the most with the least. From a business point of view, for example, maximum profit is desired from the least investment; maximum crop yield is desired with minimum investment in fertilizers; and maximum strength, longevity, efficiency and utilization are desired with minimum initial investment and operational cost for various household and industrial equipment and machinery. To set a record in a race, for example, the aim is to be the fastest (shortest time).
The concept of optimization has great significance in both human affairs and the laws of nature: it is the inherent characteristic of achieving the best or most favorable (minimum or maximum) outcome from a given situation [1]. In addition, as the element of design is present in all fields of human activity, all aspects of optimization can be viewed and studied as design optimization without any loss of generality. This makes it clear that the study of design optimization can help not only in the human activity of creating optimum designs of products, processes and systems, but also in the understanding and analysis of mathematical/physical phenomena and in the solution of mathematical problems. Constraints are an inherent part of real-world problems and they have to be satisfied to ensure the acceptability of the solution. There are always numerous requirements and constraints imposed on the designs of components, products, processes or systems in real-life engineering practice, just as in all other fields of design activity. Therefore, creating a feasible design under all these diverse requirements/constraints is already a difficult task, and ensuring that the feasible design created is also the 'best' is even more difficult.
©Springer International Publishing Switzerland 2017
A.J. Kulkarni et al., Cohort Intelligence: A Socio-inspired Optimization Method,
Intelligent Systems Reference Library 114, DOI 10.1007/978-3-319-44254-9_1
1.1.1 General Problem Statement
All the optimal design problems can be expressed in a standard general form stated
as follows:
Minimize the objective function f(X)    (1.1)

subject to

s number of inequality constraints: g_j(X) ≤ 0, j = 1, 2, ..., s    (1.2)

w number of equality constraints: h_j(X) = 0, j = 1, 2, ..., w    (1.3)

where the design variables are given by x_i, i = 1, 2, ..., n, or by the design variable vector X = (x_1, x_2, ..., x_n)^T.
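As a concrete illustration (a toy instance, not a problem from the chapter), the standard form above can be written directly in code: an objective f(X), a list of inequality constraints g_j(X) ≤ 0, a list of equality constraints h_j(X) = 0, and a feasibility check. The particular functions and the coarse grid search below are illustrative assumptions.

```python
# Toy instance of the standard form: minimize f(X) subject to g_j(X) <= 0
# and h_j(X) = 0. The objective and constraints are illustrative choices.

def f(x):
    # objective: squared distance from the point (1, 2)
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):
    # inequality constraints g_j(X) <= 0
    return [x[0] + x[1] - 4.0]          # x1 + x2 <= 4

def h(x):
    # equality constraints h_j(X) = 0 (none in this toy problem)
    return []

def is_feasible(x, tol=1e-9):
    return all(gj <= tol for gj in g(x)) and all(abs(hj) <= tol for hj in h(x))

# Coarse grid search over 0 <= x_i <= 5, keeping only feasible points.
best_x, best_f = None, float("inf")
steps = 51
for i in range(steps):
    for j in range(steps):
        x = (5.0 * i / (steps - 1), 5.0 * j / (steps - 1))
        if is_feasible(x) and f(x) < best_f:
            best_x, best_f = x, f(x)

print(best_x, best_f)  # the unconstrained minimizer (1, 2) is feasible here
```

A grid search is used only to keep the sketch self-contained; any of the solvers discussed later in the chapter could be applied to the same standard-form data.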
A problem where the objective function is to be maximized (instead of minimized) can also be handled with this standard problem statement, since maximizing a function f(X) is the same as minimizing -f(X). Similarly, a ≥-type inequality constraint can be treated by reversing the sign of the constraint function to form the ≤ type.
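For example (with an illustrative one-variable function, not one from the chapter), a maximization problem is solved by minimizing the negated objective:

```python
# Maximizing f(x) = 5 - (x - 3)^2 is equivalent to minimizing -f(x).
def f(x):
    return 5.0 - (x - 3.0) ** 2

def neg_f(x):
    return -f(x)

# Coarse scan of [0, 6]: the minimizer of -f is the maximizer of f.
xs = [i / 100 for i in range(601)]
x_star = min(xs, key=neg_f)
print(x_star, f(x_star))  # -> 3.0 5.0
```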
Sometimes there may be simple limits on the allowable range of values a design variable can take, and these are known as side constraints:

x_i^l ≤ x_i ≤ x_i^u

where x_i^l and x_i^u are the lower and upper limits of x_i, respectively. However, these side constraints can be easily converted into normal inequality constraints (by splitting each into two inequality constraints).
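A sketch of that conversion (with illustrative bounds): each side constraint x_i^l ≤ x_i ≤ x_i^u becomes the pair x_i^l - x_i ≤ 0 and x_i - x_i^u ≤ 0.

```python
# Convert side constraints x_l <= x_i <= x_u into standard g_j(X) <= 0 form.
def bounds_to_inequalities(lower, upper):
    """Return a list of constraint functions, two per bounded variable."""
    gs = []
    for i, (lo, hi) in enumerate(zip(lower, upper)):
        gs.append(lambda x, i=i, lo=lo: lo - x[i])   # x_l - x_i <= 0
        gs.append(lambda x, i=i, hi=hi: x[i] - hi)   # x_i - x_u <= 0
    return gs

gs = bounds_to_inequalities([0.0, -1.0], [5.0, 1.0])
x = [2.0, 0.5]                       # inside both ranges
print(all(gj(x) <= 0 for gj in gs))  # -> True
```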
Although all optimal design problems can be expressed in the above standard form, some categories of problems may be expressed in alternative specialized forms for greater convenience and efficiency.
1.1.2 Active/Inactive/Violated Constraints
The constraints in an optimal design problem restrict the entire design space to a smaller subset known as the feasible region, i.e. not every point in the design space is feasible. See Fig. 1.1.
An inequality constraint g_j(X) is said to be violated at the point x if it is not satisfied there (g_j(X) > 0).

If g_j(X) is strictly satisfied (g_j(X) < 0) then it is said to be inactive at x.

If g_j(X) is satisfied at equality (g_j(X) = 0) then it is said to be active at x.

The set of points at which an inequality constraint is active forms a constraint boundary which separates the feasible region of points from the infeasible region.

Based on the above definitions, equality constraints can only be either violated (h_j(X) ≠ 0) or active (h_j(X) = 0) at any point x.

The set of points where an equality constraint is active forms a sort of boundary, both sides of which are infeasible.
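These definitions translate directly into a small classifier. In code, "active" must be judged to within a tolerance, since exact equality rarely holds in floating point; the tolerance value below is an illustrative choice.

```python
# Classify an inequality constraint value g_j(X) at a point as
# violated (g > 0), active (g = 0 within tolerance) or inactive (g < 0).
def classify(g_value, tol=1e-9):
    if g_value > tol:
        return "violated"
    if g_value >= -tol:
        return "active"
    return "inactive"

print(classify(0.3))    # violated
print(classify(0.0))    # active
print(classify(-0.7))   # inactive
```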
1.1.3 Global and Local Minimum Points
Let the set of design variables that gives rise to a minimum of the objective function f(X) be denoted by X* (the asterisk is used to indicate quantities and terms referring to an optimum point). An objective f(X) is at its global (or absolute) minimum at the point X* if:

f(X*) ≤ f(X) for all X in the feasible region
Fig. 1.1 Active/inactive/violated constraints (x_1-x_2 design space showing constraint boundaries g_1(x) = 0, g_2(x) = 0, g_3(x) = 0 and h_1(x) = 0, with sample points x_a, x_b, x_c)
The objective has a local (or relative) minimum at the point X* if:

f(X*) ≤ f(X) for all feasible X within a small neighborhood of X*

A graphical representation of these concepts is shown in Fig. 1.2 for the case of a single variable x over a closed feasible region a ≤ x ≤ b.
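To make the distinction concrete, the sketch below samples an illustrative multimodal function (an assumption, not one from the chapter) over a closed interval [a, b], flags grid points lower than both neighbours as local minima, and takes the lowest sampled point as the global minimum on the grid.

```python
import math

# Illustrative multimodal objective on the closed interval [a, b].
def f(x):
    return math.sin(3.0 * x) + 0.5 * x

a, b, n = 0.0, 5.0, 1001
xs = [a + (b - a) * i / (n - 1) for i in range(n)]
ys = [f(x) for x in xs]

# Interior grid points lower than both neighbours are local minima.
local_minima = [xs[i] for i in range(1, n - 1)
                if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]
global_minimum = min(xs, key=f)

print(local_minima)     # two interior local minima on this interval
print(global_minimum)   # the lower of the two is the global minimum
```

On this interval the function has two interior local minima (near x ≈ 1.51 and x ≈ 3.61); only the first is the global minimum, which is exactly the distinction Fig. 1.2 illustrates.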
1.2 Contemporary Optimization Approaches

Several mathematical optimization techniques have been practiced so far, for example gradient methods, Integer Programming, Branch and Bound, the Simplex algorithm, dynamic programming, etc. These techniques can efficiently solve problems of limited size, and they are mostly applicable to linear problems. In addition, as the number of variables and constraints increases, the computational time to solve the problem may increase exponentially, which may limit their applicability. Furthermore, as the complexity of problem domains increases, solving such complex problems using mathematical optimization techniques is becoming more and more cumbersome. In addition, certain heuristics have been developed to solve specific problems of a certain size; such heuristics have very limited flexibility for solving different classes of problems.

In the past few years a number of nature-/bio-inspired optimization techniques (also referred to as metaheuristics) such as Evolutionary Algorithms (EAs), Swarm Intelligence (SI), etc. have been developed. An EA such as the Genetic Algorithm (GA) works on the principle of the Darwinian theory of survival of the fittest individual in the population. The population is evolved using operators such as selection, crossover, mutation, etc. According to Deb [2] and Ray et al. [3], GA can often reach very close to the global optimal solution but necessitates the incorporation of local improvement techniques. Similar to GA, the mutation-driven approach of Differential Evolution (DE) was proposed by Storn and Price [4]; it helps explore and further locally exploit the solution space to reach the global optimum. Although easy to implement, it has several problem-dependent parameters that need to be tuned, and it may also require several associated trials to be performed.

Fig. 1.2 Minimum and maximum points (objective f(x) over the closed feasible region a ≤ x ≤ b, showing constraint boundaries with local and global minima and maxima)
Inspired by the social behavior of living organisms such as insects, fish, etc., which can communicate with one another either directly or indirectly, the paradigm of SI is a decentralized, self-organizing optimization approach. These algorithms work on the cooperative behavior of the organisms rather than competition amongst them. In SI, every individual evolves by sharing information with others in the society. A technique such as Particle Swarm Optimization (PSO) is inspired by the social behavior of bird flocking and of schools of fish searching for food [5]. The fish or birds are considered as particles in the solution space searching for the local as well as global optimum points. The directions of movement of these particles are decided by the best particle in the neighborhood and the best particle in the entire swarm. Ant Colony Optimization (ACO) works on the ants' social behavior of foraging for food following a shortest path [6]. The ant is considered as an agent of the colony. It searches for a better solution in its close neighborhood and iteratively updates its solution. The ants also update their pheromone trails at the end of every iteration. This helps every ant decide its direction, which may further self-organize them to reach the global optimum.
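The PSO update just described, each particle pulled toward its own best-known position and the swarm's best, can be sketched in a few lines. The parameter values and the sphere objective below are illustrative assumptions, not prescriptions from the chapter.

```python
import random

random.seed(0)

# Minimal PSO sketch on an illustrative 2-D sphere function.
def f(x):
    return x[0] ** 2 + x[1] ** 2

n_particles, n_iters, dim = 20, 200, 2
w, c1, c2 = 0.7, 1.5, 1.5       # inertia and acceleration coefficients (typical choices)

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]                  # each particle's best-known position
gbest = min(pbest, key=f)[:]                 # best position in the entire swarm

for _ in range(n_iters):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward own best
                         + c2 * r2 * (gbest[d] - pos[i][d]))     # pull toward swarm best
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

print(gbest, f(gbest))  # converges near the optimum (0, 0)
```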
Similar to ACO, the Bees Algorithm (BA) also works on the social behavior of honey bees finding food; however, the bee colony tends to optimize the number of members involved in particular pre-decided tasks [7]. The Bees Algorithm is a population-based search algorithm proposed by Pham et al. [7] in a technical report presented at Cardiff University, UK. It basically mimics the food-foraging behavior of honey bees. According to Pham and Castellani [8] and Pham et al. [7], the Bees Algorithm mimics the foraging strategy of honey bees looking for the best solution. Each candidate solution is thought of as a flower or a food source, and a population or colony of n bees is used to search the problem solution space. Each time an artificial bee visits a solution, it evaluates its objective function. Even though it has been proven effective in solving continuous as well as combinatorial problems [8, 9], some measure of the topological distance between the solutions is required. The Firefly Algorithm (FA) is an emerging metaheuristic swarm optimization technique based on the natural behavior of fireflies, which rests on the bioluminescence phenomenon [10, 11]. Fireflies produce short and rhythmic flashes to communicate with other fireflies and attract potential prey. The light intensity/brightness I of a flash at a distance r obeys the inverse square law, i.e. I ∝ 1/r², in addition to light absorption by the surrounding air. This makes most fireflies visible only up to a limited distance, usually several hundred meters at night, which is enough to communicate. The flashing light of fireflies can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate optimization algorithms [10, 11]. As with other metaheuristic algorithms, constraint handling is one of the crucial issues being addressed by researchers [12].
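The core firefly move can be sketched as follows: a dimmer firefly moves toward a brighter one with an attractiveness that decays with the squared distance (mimicking light absorption), plus a small random step. The parameter values beta0, gamma and alpha, and the objective used for "brightness", are illustrative assumptions.

```python
import math
import random

random.seed(1)

# One firefly-algorithm move step: firefly i is attracted toward a brighter
# firefly j; attractiveness beta decays with the squared distance r^2,
# mimicking light absorption (illustrative parameter values).
def move_towards(xi, xj, beta0=1.0, gamma=0.1, alpha=0.05):
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)          # attractiveness at distance r
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

def f(x):                                         # brightness ~ inverse of objective
    return x[0] ** 2 + x[1] ** 2

xi, xj = [2.0, 2.0], [0.1, 0.0]                   # xj is brighter (lower f)
xi_new = move_towards(xi, xj)
print(f(xi_new) < f(xi))  # True: the move takes xi toward the brighter firefly
```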
1.3 Socio-Inspired Optimization Domain
Every society is a collection of self-interested individuals. Every individual has a desire to improve itself, and this improvement is possible through learning from one another. Furthermore, the learning is achieved through interaction as well as competition with other individuals. It is important to mention here that this learning may lead to quick improvement in an individual's behavior; however, it is also possible that for certain individuals the learning and further improvement are slower. This is because the learning and associated improvement depend upon the quality of the individual being followed. In the context of optimization (minimization or maximization), if the individual solution being followed is better, the chances of improving the follower's solution increase. Due to uncertainty, it is also possible that the solution being followed is of inferior quality compared to the follower's. This may drive the follower's solution to a local optimum; however, due to the inherent ability of societal individuals to keep improving themselves, other individuals are also selected for learning. This may make the individuals jump out of the possible local optimum and reach the global optimum solution. This common goal of improvement in behavior/solutions reveals the self-organizing behavior of the entire society. It is an effective self-organizing system which may help in solving a variety of complex optimization problems.
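As a toy illustration of this follow-and-improve idea (emphatically not the actual Cohort Intelligence procedure, which the following chapters define): each candidate usually follows the current best candidate, occasionally follows a randomly chosen one (possibly inferior, which is what allows escaping local optima in harder landscapes), and samples a new solution near the followed candidate. All functions and parameters below are illustrative assumptions.

```python
import random

random.seed(2)

# Toy society of candidates minimizing an illustrative 1-D objective by
# following (usually) better candidates; a sketch of the socio-inspired
# idea only, not the Cohort Intelligence method itself.
def f(x):
    return (x - 3.0) ** 2

candidates = [random.uniform(-10, 10) for _ in range(5)]
for _ in range(100):
    ranked = sorted(candidates, key=f)
    new = []
    for x in candidates:
        # Follow the best with high probability, otherwise a random candidate;
        # following an inferior candidate is possible by design.
        target = ranked[0] if random.random() < 0.8 else random.choice(ranked)
        new.append(target + random.gauss(0.0, 0.5))  # learn from (sample near) the target
    # Each individual keeps whichever of its old and new solutions is better.
    candidates = [min(old, cand, key=f) for old, cand in zip(candidates, new)]

best = min(candidates, key=f)
print(best)  # close to the minimizer x = 3
```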
The following chapters discuss an emerging Artificial Intelligence (AI) optimization technique referred to as Cohort Intelligence (CI). The framework of CI, along with its validation by solving several unconstrained test problems, is discussed in detail. In addition, numerous applications of the CI methodology and its modified versions in the domain of machine learning are provided. Moreover, the application of CI to solving several test cases of combinatorial problems such as the Traveling Salesman Problem (TSP) and the 0-1 Knapsack Problem is discussed. Importantly, the CI methodology solving real-world combinatorial problems from the healthcare and inventory problem domains, as well as complex and large-sized cross-border transportation problems, is also discussed. These applications underscore the importance of socio-inspired optimization methods such as CI.
References

1. Kulkarni, A.J., Tai, K., Abraham, A.: Probability collectives: a distributed multi-agent system approach for optimization. In: Intelligent Systems Reference Library, vol. 86. Springer, Berlin (2015). doi:10.1007/978-3-319-16000-9, ISBN: 978-3-319-15999-7
2. Deb, K.: An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 186, 311-338 (2000)
3. Ray, T., Tai, K., Seow, K.C.: Multiobjective design optimization by an evolutionary algorithm. Eng. Optim. 33(4), 399-424 (2001)
4. Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341-359 (1997)
5. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948 (1995)
6. Dorigo, M., Birattari, M., Stützle, T.: Ant colony optimization: artificial ants as a computational intelligence technique. IEEE Comput. Intell. Mag., 28-39 (2006)
7. Pham, D.T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M.: The bees algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK (2005)
8. Pham, D.T., Castellani, M.: The bees algorithm: modelling foraging behaviour to solve continuous optimisation problems. Proc. IMechE, Part C 223(12), 2919-2938 (2009)
9. Pham, D.T., Castellani, M.: Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms. Soft Comput. 1-33 (2013)
10. Yang, X.S.: Firefly algorithms for multimodal optimization. In: Stochastic Algorithms: Foundations and Applications. Lecture Notes in Computer Science 5792, pp. 169-178. Springer, Berlin (2009)
11. Yang, X.S., Hosseini, S.S.S., Gandomi, A.H.: Firefly Algorithm for solving non-convex economic dispatch problems with valve loading effect. Appl. Soft Comput. 12(3), 1180-1186 (2012)
12. Deshpande, A.M., Phatnani, G.M., Kulkarni, A.J.: Constraint handling in firefly algorithm. In: Proceedings of the IEEE International Conference on Cybernetics, pp. 186-190 (2013)