INFO: An Efficient
Optimization Algorithm based
on Weighted Mean of Vectors
Iman Ahmadianfar a,*, Ali Asghar Heidari b,c, Saeed Noshadian a, Huiling Chen d,*, Amir H. Gandomi e
a Department of Civil Engineering, Behbahan Khatam Alanbia University of Technology,
Behbahan, Iran
Email: im.ahmadian@gmail.com, i.ahmadianfar@bkatu.ac.ir (Iman Ahmadianfar).
Saeed.noshadian@gmail.com (Saeed Noshadian).
b School of Surveying and Geospatial Engineering, College of Engineering, University of
Tehran, Tehran 1439957131, Iran
Email: as_heidari@ut.ac.ir, aliasghar68@gmail.com
c Department of Computer Science, School of Computing, National University of Singapore,
Singapore 117417, Singapore
Email: aliasgha@comp.nus.edu.sg, t0917038@u.nus.edu
d College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou,
Zhejiang 325035, China
Email: chenhuiling.jlu@gmail.com
e Faculty of Engineering & Information Technology, University of Technology Sydney, NSW
2007, Australia
Email: gandomi@uts.edu.au
Corresponding Authors: Iman Ahmadianfar and Huiling Chen
E-mail: im.ahmadian@gmail.com (Iman Ahmadianfar), chenhuiling.jlu@gmail.com (Huiling
Chen)
Abstract
This study presents the analysis and principle of an innovative optimizer named weIghted meaN oF vectOrs (INFO) for optimizing different problems. INFO is a modified weighted mean method in which the weighted mean idea is employed within a solid structure to update the vectors' positions using three core procedures: updating rule, vector combining, and local search.
The updating rule stage is based on a mean-based law and convergence acceleration to generate
new vectors. The vector combining stage creates a combination of obtained vectors with the
updating rule to achieve a promising solution. The updating rule and vector combining steps
were improved in INFO to increase the exploration and exploitation capacities. Moreover,
the local search stage helps this algorithm escape low-accuracy solutions and improves exploitation and convergence. The performance of INFO was evaluated on 48 mathematical test functions and five constrained engineering test cases. Compared against the literature, the results demonstrate that INFO outperforms other basic and advanced methods in terms of exploration and exploitation. In the case of the engineering problems, the results indicate that INFO can converge to within 0.99% of the global optimum solution. Hence, the INFO algorithm is a promising tool for optimal design in optimization problems, owing to its considerable efficiency in optimizing constrained cases.
The source codes of this algorithm are publicly available at https://imanahmadianfar.com and https://aliasgharheidari.com/INFO.html.
Keywords: Optimization; Swarm intelligence; Exploration; Exploitation; Weighted Mean of Vectors Algorithm
1. Introduction
With the development of society, people face more and more complex problems, and solving this class of complex problems is an essential requirement for promoting social development. Although many traditional numerical and analytical methods have been applied to such analyses, deterministic methods often cannot provide a fitting solution to challenging problems with non-convex and highly non-linear search domains, since the complexity and dimensionality of these problems grow exponentially. Optimizing such problems with deterministic methods, such as the Lagrange and Simplex methods, requires initial information about the problem and complicated computations. Thus, exploring the global optimum of such problems with these methods is not always possible or feasible [1]. Therefore, it remains urgent to develop efficient methods to solve increasingly complex optimization problems. Optimization methods can take many forms and formulations, with essentially no limit on form; what is essential for the stochastic class is a core mechanism for exploration and a core mechanism for exploitation, which can then be applied to many forms of problems, such as multi-objective optimization, fuzzy optimization, robust optimization, memetic optimization, large-scale optimization, many-objective optimization, and single-objective optimization.
One common class of optimization methods, swarm intelligence (SI) algorithms, is swarm-based optimization built on the evolution of an initial set of agents and the attraction of agents toward better solutions, which, in the extreme case, is the optimum solution, while avoiding locally optimal solutions. Swarm intelligence optimization algorithms have intelligent characteristics such as self-adaptation, self-learning, and self-organization, and they are convenient for large-scale parallel computing; as such, SI is a trendy optimization technology.
In recent years, some classes of swarm-based optimization algorithms have been
applied as simple and reliable methods for realizing the solutions of problems in both the
computer science field and industry. Numerous researchers have demonstrated that swarm-
based optimization is very promising for tackling many challenging problems [2-4]. Some
algorithms employ methods that mimic natural evolutionary mechanisms and basic genetic
rules, such as selection, reproduction, mutation, and migration [5]. One of the most popular
evolutionary methods is the genetic algorithm (GA) introduced by Holland [6]. With its unique three core operations of crossover, mutation, and selection, GA has achieved outstanding performance in many optimization problems, such as twice continuously differentiable NLP problems [7], production prediction, and neural architecture search. Other well-regarded
evolutionary algorithms include differential evolution (DE) [8] and evolutionary strategies
(ES) [9]. This kind of evolutionary algorithm simulates the way of biological evolution in
nature and has strong adaptability to problems. Moreover, the rise of deep neural networks
(DNN) in recent years has made people pay more attention to how to design neural network
architecture automatically. Therefore, network architecture search (NAS) based on
evolutionary algorithms has become a hot topic [10]. Some methods are motivated by physical
laws, such as simulated annealing (SA) [11]. As one of the most well-known methods in this
family presented by Kirkpatrick et al. [11], SA simulates the annealing mechanism utilized in
physical material sciences. Also, with its excellent local search capabilities, SA can find more
potential solutions in many engineering problems than other traditional SI algorithms [12-14].
One of the latest well-established methods is the gradient-based optimizer (GBO) (https://imanahmadianfar.com), which uses Newtonian logic to explore suitable regions and reach the global solution [15]. The method has been applied in many fields, including feature selection [16] and parameter estimation of photovoltaic models [16]. Most swarm methods mimic the equations of particle swarm optimization (PSO) by varying the basis of inspiration around the collective social behaviors of animal groups [17].
swarm optimization (PSO) by varying the basis of inspiration around collective social
behaviors of animal groups [17]. Particle swarm optimization (PSO) is one of the most
successful algorithms in this class, which was inspired by birds' social and individual
intelligence when flocking [18]. In detail, PSO has few parameters to adjust; moreover, unlike other methods, PSO has a memory mechanism, so the knowledge of better-performing particles is preserved, which helps the algorithm find the optimal solution more quickly. Currently, PSO has taken its place in the fields of large-scale optimization problems [19], feature selection [20], single-objective optimization problems [21], multi-objective optimization problems [22], and high-dimensional expensive problems [23]. Ant
colony optimization (ACO) is another popular approach based on ants' foraging behavior
[24]. In particular, the concept of the pheromone is a major feature of ACO: the pheromone secreted by ants while searching for food helps the population find better solutions at a faster rate. Soon after ACO was proposed, it was applied to the traveling salesman problem [25] and other complex optimization problems [26], achieving satisfactory results.
While these optimization methods can solve various challenging, real optimization problems, the No Free Lunch (NFL) theorem motivates researchers to present new variants of existing methods or new optimizers built from scratch [27]. This theorem states that no optimization method can work as the best tool for all problems. Accordingly, one algorithm can be the most suitable approach for several problems but incompetent for other optimization problems. Hence, this theorem is the basis of many studies in this field.
In this research, we were motivated to improve upon novel metaheuristic methods that suffer
from weak performance, have verification bias, and underperform compared to other existing
methods [28-30]. As such, the proposed INFO algorithm is a forward-thinking, innovative
attempt against such methods that provides a promising platform for the future of
optimization literature in computer science. Furthermore, we aim to apply this method to a
variety of optimization problems and make it a scalable optimizer.
In this paper, we designed a new optimizer (INFO) by modifying the weighted mean
method and updating the vectors' position, which can help form a more robust structure. In
detail, updating rule, vector combining, and local search are the three core processes of INFO.
Unlike other methods, the updating rule based on the mean is used to generate new vectors
in INFO, thus accelerating the convergence speed. In the vector combination stage, two
vectors acquired in the vector update stage are combined to produce a new vector for
improving local search ability. This operation ensures the diversity of the population to a
certain extent. Taking into account the global optimal position and the mean-based rule, a
local operation is executed, which can effectively improve the problem of INPO being
vulnerable to local optimal. This work's primary goal was to introduce the above three core
processions for optimizing various kinds of optimization cases and engineering problems,
such as structural and mechanical engineering problems and water resources systems. The
INFO algorithm employs the concept of weighted mean to move agents toward a better
position. This main motive behind INFO emphasizes its performance aspects to potentially
solve some of the optimization problems that other methods cannot solve. It should be noted
that there is no inspiration part in INFO, and it is tried to move the field to go beyond the
metaphor.
The rest of this paper is organized as follows. In Sections 2 and 3, the main structures
of INFO are described in detail. The set of mathematical benchmark functions employed to
assess the efficiency of INFO is presented in Section 4. Section 5 solves four real engineering
problems to show the capability of the proposed algorithm. Lastly, Section 6 presents the conclusions of this study and gives some ideas for future research.
2. Literature review
This section describes the previous studies on optimization methods and presents this
research's primary motivation. Generally, evolutionary algorithms are classified into two types:
single-based and population-based algorithms [17, 31]. In the first case, the algorithm's search
process begins with a single solution and updates its position during the optimization process.
The most well-known single-solution-based algorithms include simulated annealing (SA) [11],
tabu search (TS) [32], and hill-climbing [33]. These algorithms allow easy implementation and
require only a small number of function evaluations during optimization. However, the
disadvantages are a high possibility of becoming trapped in local optima and a failure to exchange information, because these methods follow only a single search trajectory.
Conversely, the optimization process in population-based algorithms begins with a
set of solutions and updates their position during optimization. GA, DE, PSO, artificial bee
colony (ABC) [34], ant colony optimization (ACO) [35-37], the slime mould algorithm (SMA) [38] (https://aliasgharheidari.com/SMA.html), and Harris hawks optimization (HHO) [39-41] (https://aliasgharheidari.com/HHO.html) are some of the population-based
algorithms. These methods have a high capacity to escape local optimal solutions because they
use a set of solutions during optimization. Moreover, the exchange of information can be
shared between solutions, which helps them to better search in difficult search spaces.
However, these algorithms require a large number of function evaluations during optimization
and high computational costs.
According to the above discussion, the population-based algorithms are considered
more reliable and robust optimization methods than single-solution-based algorithms.
Generally, an algorithm's best formulation is explored by evaluating it on different types of
benchmark and engineering problems.
Ordinarily, optimizers employ one or more operators to perform two phases:
exploration and exploitation. An optimization algorithm requires a search mechanism to find
promising areas in the search space, which is done in the exploration phase. The exploitation
phase improves the local search ability and convergence speed to achieve promising areas.
The balance between these two phases is a challenging issue for any optimization algorithm.
According to previous studies, no precise rule has been established to determine the most appropriate time to transition from exploration to exploitation, owing to the unexplored form of search spaces and the random nature of this type of optimizer [17, 31]. Therefore, realizing
this issue is essential to design a robust and reliable optimization algorithm.
Considering the main challenges of creating a high-performance optimization
algorithm and all critical points highlighted in the literature above [42-44], we introduce an
efficient optimizer based on the concept of the weighted mean of vectors. By avoiding a basis
of nature inspiration, INFO offers a promising method to avoid and reduce the challenges of
other optimization algorithms, thus providing a strong step in the direction towards a
metaphor-free class of optimization algorithms.
3. Definition of weighted mean
The optimization algorithm introduced in this study is based on a weighted mean,
which demonstrates a unique location in an object or system [45]. A detailed definition of this
concept is subsequently provided.
3.1. Mathematical definition of weighted mean
The mean of a set of vectors is described as the average of their positions (xi), as
weighted by the fitness of a vector (wi) [45]. In fact, this concept is used due to its simplicity
and ease of implementation. Fig. 1 depicts the weighted mean of the set of solutions (vectors),
in which the solutions with bigger weights are more effective in calculating the weighted mean
of solutions.
The formulation of the weighted mean (WM) is defined by Eq. (1) [45]:

$WM = \dfrac{\sum_{i=1}^{N} x_i w_i}{\sum_{i=1}^{N} w_i}$   (1)

where N is the number of vectors.
To provide a better explanation, WM can be written for two vectors, as shown in Eq. (1.1) [45]:

$WM = \dfrac{x_1 w_1 + x_2 w_2}{w_1 + w_2}$   (1.1)
(1.1)
In this study, each vector's weight was calculated based on a wavelet function (WF)
[46, 47]. Generally, the wavelet is a useful tool for modeling seismic signals by compounding
translations and dilations of an oscillatory function (i.e., mother wavelet) with a finite period.
This function is employed to create effective fluctuations during the optimization process.
Fig. 2 displays the mother wavelet used in this study, which is defined as:

$w(x) = \cos(\pi x) \times \exp\left(-\dfrac{x^{2}}{\sigma}\right)$   (2)
where $\sigma$ is a constant number called the dilation parameter.
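For readers who want to experiment, the following is a minimal sketch (not the authors' code) of this mother wavelet as reconstructed in Eq. (2); the value of the dilation parameter is an illustrative assumption.

```python
# Minimal sketch of the mother wavelet of Eq. (2), as reconstructed above:
# w(x) = cos(pi*x) * exp(-x^2 / sigma). The sigma value here is an assumption.
import numpy as np

def mother_wavelet(x, sigma=1.0):
    # Damped cosine: the exponential envelope confines the oscillation.
    return np.cos(np.pi * x) * np.exp(-x**2 / sigma)

print(mother_wavelet(np.array([0.0, 0.5, 1.0, 2.0])))
```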
Fig. 1. The weighted mean of a set of solutions
Fig. 2. Mother wavelet
Figs. 3a and 3b display three vectors, and the differences between them are shown in Fig. 3c. The weighted mean of the vectors is calculated by Eq. (3):

$WM = \dfrac{w_1 (x_1 - x_2) + w_2 (x_1 - x_3) + w_3 (x_2 - x_3)}{w_1 + w_2 + w_3}$   (3)
in which

$w_1 = \cos\big((f(x_1) - f(x_2)) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_1) - f(x_2)}{\sigma}\right|\right)$   (3.1)

$w_2 = \cos\big((f(x_1) - f(x_3)) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_1) - f(x_3)}{\sigma}\right|\right)$   (3.2)

$w_3 = \cos\big((f(x_2) - f(x_3)) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_2) - f(x_3)}{\sigma}\right|\right)$   (3.3)

where $f(x)$ denotes the fitness function of the vector $x$.
Fig. 3. The weighted mean of vectors for three vectors
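To make the construction concrete, the following is a hedged NumPy sketch of Eqs. (3)-(3.3) for three vectors; the fitness function, the dilation value, and the tiny denominator guard are illustrative assumptions rather than details fixed by the paper.

```python
# Illustrative sketch (not the reference implementation) of the weighted mean
# of three vectors, Eqs. (3)-(3.3).
import numpy as np

def wavelet_weight(fa, fb, sigma=1.0):
    # w_i pattern of Eqs. (3.1)-(3.3): a damped cosine of the fitness gap.
    d = fa - fb
    return np.cos(d + np.pi) * np.exp(-abs(d) / sigma)

def weighted_mean(x1, x2, x3, f):
    w1 = wavelet_weight(f(x1), f(x2))
    w2 = wavelet_weight(f(x1), f(x3))
    w3 = wavelet_weight(f(x2), f(x3))
    num = w1 * (x1 - x2) + w2 * (x1 - x3) + w3 * (x2 - x3)
    return num / (w1 + w2 + w3 + 1e-25)  # guard against a zero denominator

sphere = lambda x: float(np.sum(x**2))
x1, x2, x3 = np.array([1.0, 2.0]), np.array([0.5, 1.0]), np.array([2.0, 0.0])
print(weighted_mean(x1, x2, x3, sphere))
```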
4. Weighted mean of vectors (INFO) algorithm
The weighted mean of vectors algorithm (INFO) is a population-based optimization
algorithm that calculates the weighted mean for a set of vectors in the search space. In the
proposed algorithm, the population is comprised of a set of vectors that demonstrate possible
solutions. The INFO algorithm finds the optimal solution over several successive generations.
The three operators update the vectors' positions in each generation:
• Stage 1: Updating rule
• Stage 2: Vector combining
• Stage 3: Local search
Herein, the problem of minimizing the objective function is considered as an example.
4.1. Initialization stage
The INFO algorithm comprises a population of Np vectors in a D-dimensional search domain ($X_l^g = \{x_{l,1}^g, x_{l,2}^g, \ldots, x_{l,D}^g\},\ l = 1, 2, \ldots, Np$). In this step, some control parameters are introduced and defined for the INFO algorithm. There are two main parameters: the weighted mean factor $\sigma$ and the scaling rate $\beta$.

Generally speaking, the scaling rate $\beta$ is used to amplify the vector obtained via the updating rule operator and depends on the size of the search domain. The $\sigma$ factor is used to scale the weighted mean of vectors. Its value is specified based on the feasible search space of the problem and is reduced according to an exponential formula. These two parameters do not need to be adjusted by the user and change dynamically with the generation number. The INFO algorithm uses a simple method, called random generation, to generate the initial vectors.
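For concreteness, a minimal sketch of this random-generation step is shown below; the bounds, population size, and dimensionality are illustrative assumptions rather than values fixed by the algorithm.

```python
# Random generation of the initial population (assumed uniform sampling).
import numpy as np

rng = np.random.default_rng(seed=42)
Np, D = 30, 10                      # population size and dimensionality
lb, ub = -100.0, 100.0              # bounds of the feasible search domain

# Each row is one vector x_l = (x_{l,1}, ..., x_{l,D}), l = 1..Np.
X = lb + rng.random((Np, D)) * (ub - lb)
print(X.shape)                      # (30, 10)
```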
4.2. Updating rule stage
In the INFO algorithm, the updating rule operator increases the population's diversity
during the search procedure. This operator uses the weighted mean of vectors in order to
create new vectors. Indeed, this operator distinguishes the INFO algorithm from other
algorithms and consists of two main parts. In the first part, a mean-based rule is extracted
from the weighted mean for a set of random vectors. The mean-based method begins from a
random initial solution and moves to the next solution using the weighted mean information
of a set of randomly selected vectors. The second part is convergence acceleration, which
enhances convergence speed and promotes the algorithm's performance to reach optimal
solutions.
In general, INFO first employs a set of randomly selected vectors to obtain the weighted mean of vectors rather than moving the current vector directly toward a better solution. In this work, the MeanRule, based on the best, better, and worst solutions, is used to increase the population's diversity. It should be noted that the better solution is randomly selected from the top five solutions (regarding the objective function value). The mean-based rule is therefore formulated through the MeanRule defined in Eq. (4):
$MeanRule_l^g = r \times WM1_l^g + (1 - r) \times WM2_l^g, \quad l = 1, 2, \ldots, Np$   (4)

$WM1_l^g = \sigma \times \dfrac{w_1 (x_{a1} - x_{a2}) + w_2 (x_{a1} - x_{a3}) + w_3 (x_{a2} - x_{a3})}{w_1 + w_2 + w_3 + \varepsilon} + \varepsilon \times rand, \quad l = 1, 2, \ldots, Np$   (4.1)

where

$w_1 = \cos\big((f(x_{a1}) - f(x_{a2})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{a1}) - f(x_{a2})}{\omega_1}\right|\right)$   (4.2)

$w_2 = \cos\big((f(x_{a1}) - f(x_{a3})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{a1}) - f(x_{a3})}{\omega_1}\right|\right)$   (4.3)

$w_3 = \cos\big((f(x_{a2}) - f(x_{a3})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{a2}) - f(x_{a3})}{\omega_1}\right|\right)$   (4.4)

$\omega_1 = \max\big(f(x_{a1}), f(x_{a2}), f(x_{a3})\big)$   (4.5)

$WM2_l^g = \sigma \times \dfrac{w_1 (x_{bs} - x_{bt}) + w_2 (x_{bs} - x_{ws}) + w_3 (x_{bt} - x_{ws})}{w_1 + w_2 + w_3 + \varepsilon} + \varepsilon \times rand, \quad l = 1, 2, \ldots, Np$   (4.6)

where

$w_1 = \cos\big((f(x_{bs}) - f(x_{bt})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{bs}) - f(x_{bt})}{\omega_2}\right|\right)$   (4.7)

$w_2 = \cos\big((f(x_{bs}) - f(x_{ws})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{bs}) - f(x_{ws})}{\omega_2}\right|\right)$   (4.8)

$w_3 = \cos\big((f(x_{bt}) - f(x_{ws})) + \pi\big) \times \exp\left(-\left|\dfrac{f(x_{bt}) - f(x_{ws})}{\omega_2}\right|\right)$   (4.9)

$\omega_2 = f(x_{ws})$   (4.10)
where $f(\cdot)$ is the objective function value; $a1 \neq a2 \neq a3 \neq l$ are distinct integers randomly selected from the range [1, Np]; $\varepsilon$ is a constant with a very small value; randn is a normally distributed random value; and $x_{bs}$, $x_{bt}$, and $x_{ws}$ are the best, better, and worst solutions among all vectors in the population at the gth generation, respectively. In fact, these solutions are determined after sorting the solutions at each iteration. r is a random number within the range [0, 0.5], and $w_1$, $w_2$, and $w_3$ are three WFs for calculating the weighted mean of vectors, which help the proposed INFO algorithm search the solution space globally.
In fact, the WFs are used to vary the MeanRule space according to the wavelet theory,
which is considered for two reasons: (1) to assist the algorithm to explore the search space
more effectively and achieve better solutions by creating efficient oscillation during the
optimization procedure; and (2) to generate fine-tuning by controlling the dilation parameter
introduced in the WFs, which is used to adjust the amplitude of WF. In this study, the value
of the dilation parameter was varied using Eq. (4.10) during the optimization process. In Eq. (5), $\sigma$ is the scale factor, and $\alpha$ changes according to the exponential function defined in Eq. (5.1):

$\sigma = 2\alpha \times rand - \alpha$   (5)

$\alpha = 2 \times \exp\left(-4\dfrac{g}{Maxg}\right)$   (5.1)
where Maxg is the maximum number of generations.
The convergence acceleration part (CA) is also added to the updating rule operator to
promote global search ability, using the best vector to move the current vector in a search
space. In the INFO algorithm, it is supposed that the best solution is the nearest solution to
global optima. In fact, CA helps vectors move in a better direction. The CA presented in Eq.
(6) is multiplied by a random number to ensure that each vector has a different step size in each generation of INFO:

$CA = randn \times \dfrac{(x_{bs} - x_{a1})}{(f(x_{bs}) - f(x_{a1}) + \varepsilon)}$   (6)
where randn is a random number with a normal distribution. Finally, the new vector is calculated using Eq. (7):

$z_l^g = x_l^g + MeanRule_l^g + CA$   (7)
(7)
An optimization algorithm should generally search globally to discover the promising regions of the search domain (exploration phase). Accordingly, the proposed updating rule, based on $x_{bs}$, $x_{bt}$, $x_l^g$, and $x_{a1}^g$, is defined using the following scheme:

if rand < 0.5:
    $z1_l^g = x_l^g + \beta \times MeanRule_l^g + randn \times \dfrac{(x_{bs} - x_{a1}^g)}{(f(x_{bs}) - f(x_{a1}^g) + \varepsilon)}$
    $z2_l^g = x_{bs} + \beta \times MeanRule_l^g + randn \times \dfrac{(x_{a1}^g - x_{a2}^g)}{(f(x_{a1}^g) - f(x_{a2}^g) + \varepsilon)}$
else:
    $z1_l^g = x_{a1}^g + \beta \times MeanRule_l^g + randn \times \dfrac{(x_{a2}^g - x_{a3}^g)}{(f(x_{a2}^g) - f(x_{a3}^g) + \varepsilon)}$
    $z2_l^g = x_{bt} + \beta \times MeanRule_l^g + randn \times \dfrac{(x_{a1}^g - x_{a2}^g)}{(f(x_{a1}^g) - f(x_{a2}^g) + \varepsilon)}$
end   (8)
where $z1_l^g$ and $z2_l^g$ are the new vectors in the gth generation, and $\beta$ is the scaling rate of a vector, as defined in Eq. (9). It should be noted that in Eq. (9), $\alpha$ changes according to the exponential function defined in Eq. (9.1):

$\beta = 2\alpha \times rand - \alpha$   (9)

$\alpha = c \times \exp\left(-d\dfrac{g}{Maxg}\right)$   (9.1)

where c and d are constant numbers equal to 2 and 4, respectively. It is worth noting that for large values of $\beta$, the current position tends to diverge from the weighted mean of vectors (exploration search), while small values of this parameter force the current position to move toward the weighted mean of vectors (exploitation search).
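The pieces above can be tied together in a short sketch. The following illustrates one updating-rule step per the reconstruction of Eqs. (4)-(7); it is an interpretive reading rather than the authors' reference code, and the small constant, random seed, and fitness callback are assumptions.

```python
# Hedged sketch of one updating-rule step (Eqs. (4)-(7) as reconstructed above).
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-25   # assumed "very small" constant

def wavelet_w(fa, fb, omega):
    # Weight pattern of Eqs. (4.2)-(4.4) and (4.7)-(4.9).
    d = fa - fb
    return np.cos(d + np.pi) * np.exp(-abs(d) / (omega + eps))

def wm(xs, fs, sigma):
    # Building block of Eqs. (4.1)/(4.6) for three vectors xs with fitness fs.
    (xa, xb, xc), (fa, fb, fc) = xs, fs
    omega = max(fa, fb, fc)
    w1, w2, w3 = wavelet_w(fa, fb, omega), wavelet_w(fa, fc, omega), wavelet_w(fb, fc, omega)
    num = w1 * (xa - xb) + w2 * (xa - xc) + w3 * (xb - xc)
    return sigma * num / (w1 + w2 + w3 + eps) + eps * rng.random()

def updating_rule(x_l, rand3, x_bs, x_bt, x_ws, f, g, maxg):
    alpha = 2.0 * np.exp(-4.0 * g / maxg)          # Eq. (5.1)
    sigma = 2.0 * alpha * rng.random() - alpha     # Eq. (5)
    r = 0.5 * rng.random()                         # r drawn from [0, 0.5]
    xa1, xa2, xa3 = rand3
    wm1 = wm((xa1, xa2, xa3), (f(xa1), f(xa2), f(xa3)), sigma)
    wm2 = wm((x_bs, x_bt, x_ws), (f(x_bs), f(x_bt), f(x_ws)), sigma)
    mean_rule = r * wm1 + (1.0 - r) * wm2          # Eq. (4)
    ca = rng.standard_normal() * (x_bs - xa1) / (f(x_bs) - f(xa1) + eps)  # Eq. (6)
    return x_l + mean_rule + ca                    # Eq. (7)

sphere = lambda v: float(np.sum(v ** 2))
pop = sorted((rng.uniform(-5, 5, 3) for _ in range(6)), key=sphere)
print(updating_rule(pop[3], (pop[2], pop[4], pop[5]),
                    pop[0], pop[1], pop[-1], sphere, g=10, maxg=100))
```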
4.3. Vector combining stage
In this study, to enhance the population's diversity in INFO, the two vectors calculated in the previous section ($z1_l^g$ and $z2_l^g$) are combined with the vector $x_l^g$, subject to a probability condition, to generate the new vector $u_l^g$ according to Eq. (10). In fact, this operator is used to promote the local search ability and provide a new, promising vector:

if rand < 0.5:
    if rand < 0.5:
        $u_l^g = z1_l^g + \mu \, |z1_l^g - z2_l^g|$   (10.1)
    else:
        $u_l^g = z2_l^g + \mu \, |z1_l^g - z2_l^g|$   (10.2)
else:
    $u_l^g = x_l^g$   (10.3)

where $u_l^g$ is the vector obtained by vector combining in the gth generation, and $\mu$ is equal to $0.05 \times randn$.
4.4. Local search stage
Effective local search ability can prevent INFO from deception and from dropping into locally optimal solutions. The local operator uses the global best position ($x_{bs}^g$) and the mean-based rule defined in Eq. (11) to further promote the exploitation, the search, and the convergence to the global optimum. According to this operator, a new vector can be produced around $x_{bs}^g$ if rand < 0.5, where rand is a random value in [0, 1]:

if rand < 0.5:
    if rand < 0.5:
        $u_l^g = x_{bs} + randn \times \big(MeanRule_l^g + randn \times (x_{bs}^g - x_{a1}^g)\big)$   (11.1)
    else:
        $u_l^g = x_{rnd} + randn \times \big(MeanRule_l^g + randn \times (v_1 x_{bs} - v_2 x_{rnd})\big)$   (11.2)
    end
end

in which

$x_{rnd} = \phi \times x_{avg} + (1 - \phi) \times \big(\phi \times x_{bt} + (1 - \phi) \times x_{bs}\big)$   (11.3)

$x_{avg} = \dfrac{x_a + x_b + x_c}{3}$   (11.4)

where $\phi$ denotes a random number in the range (0, 1), and $x_{rnd}$ is a new solution that randomly combines the components of the solutions $x_{avg}$, $x_{bt}$, and $x_{bs}$. This increases the randomness of the proposed algorithm for a better search of the solution space. $v_1$ and $v_2$ are two random numbers defined as:

$v_1 = \begin{cases} rand & \text{if } p > 0.5 \\ 1 & \text{otherwise} \end{cases}$   (11.5)

$v_2 = \begin{cases} rand & \text{if } p < 0.5 \\ 1 & \text{otherwise} \end{cases}$   (11.6)

where p denotes a random number in the range (0, 1). The random numbers $v_1$ and $v_2$ increase the impact of the best position on the vector. Finally, the proposed INFO algorithm is presented in Algorithm 1, and Fig. 4 depicts the flowchart of the proposed algorithm.
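The stage can be sketched as follows; this is a hedged reading of Eqs. (11.1)-(11.6), with the reconstructed form of v1 treated as an assumption, and with mean_rule assumed to be precomputed by the updating-rule stage.

```python
# Hedged sketch of the local-search stage around the best vector (Eq. (11)).
import numpy as np

rng = np.random.default_rng(2)

def local_search(u, x_bs, x_a1, x_bt, x_avg, mean_rule):
    if rng.random() >= 0.5:
        return u                    # local search is applied with probability 0.5
    phi = rng.random()
    x_rnd = phi * x_avg + (1 - phi) * (phi * x_bt + (1 - phi) * x_bs)  # Eq. (11.3)
    p = rng.random()
    v1 = rng.random() if p > 0.5 else 1.0   # Eq. (11.5), as reconstructed
    v2 = rng.random() if p < 0.5 else 1.0   # Eq. (11.6)
    if rng.random() < 0.5:
        # Eq. (11.1): perturb the best-so-far vector.
        return x_bs + rng.standard_normal() * (mean_rule
               + rng.standard_normal() * (x_bs - x_a1))
    # Eq. (11.2): perturb the randomized combination x_rnd.
    return x_rnd + rng.standard_normal() * (mean_rule
           + rng.standard_normal() * (v1 * x_bs - v2 * x_rnd))

dim = 3
args = [rng.uniform(-1, 1, dim) for _ in range(5)]
print(local_search(*args, mean_rule=rng.uniform(-1, 1, dim)))
```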
The computational complexity of an optimization algorithm is used to assess its run-time, which is determined by the algorithm's structure. INFO's computational complexity depends on the number of vectors, the total number of iterations, and the number of objects, and is calculated as follows:

$O(CMV) = O(T(N \times d)) = O(TNd)$   (12)

where N is the number of vectors (population size), T is the maximum number of generations, and d is the number of objects.
To demonstrate the potential of INFO to solve optimization problems, its capabilities are described below:
• INFO generates and evolves a set of random vectors for a problem and thus inherently has a higher ability to explore and escape locally optimal solutions than single-solution-based algorithms.
• The proposed updating rule in the INFO mechanism uses the mean rule and the convergence acceleration part to find promising areas of the search space.
• The proposed vector combining operator can explore the search space to improve the search capability and local optima avoidance.
• Adaptive parameters smoothly implement the transition from exploration to exploitation.
• A complementary strategy, called the local search operator, is used to further promote exploitation and convergence speed.
• A variable is used as the global best position to record an appropriate approximation of the global optimum and promote it during optimization.
• Since the vectors change their positions according to the best position generated so far, they tend toward the best regions of the search space during optimization.
Algorithm 1. Pseudo-code of the INFO algorithm.
STEP 1. Initialization
    Set parameters Np and Maxg
    Generate an initial population $P^0 = \{X_1^0, \ldots, X_{Np}^0\}$
    Calculate the objective function value of each vector, $f(X_i^0),\ i = 1, \ldots, Np$
    Determine the best vector $x_{bs}$
STEP 2. for g = 1 to Maxg do
    for i = 1 to Np do
        Randomly select $a \neq b \neq c \neq i$ within the range [1, Np]
        ► Updating rule stage
            Calculate the vectors $z1_i^g$ and $z2_i^g$ using Eq. (8)
        ► Vector combining stage
            Calculate the vector $u_i^g$ using Eq. (10)
        ► Local search stage
            Calculate the local search operator using Eq. (11)
        Calculate the objective function value $f(u_i^g)$
        if $f(u_i^g) < f(x_i^g)$ then
            $x_i^{g+1} = u_i^g$
        otherwise
            $x_i^{g+1} = x_i^g$
    end for
    Update the best vector ($x_{bs}$)
end for
STEP 3. Return the vector $x_{bs}$ as the final solution
The next sections verify the performance of INFO in several test functions and real
engineering problems.
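To show how the three stages sit inside the generation loop of Algorithm 1, the following is an end-to-end toy run on the sphere function; it deliberately simplifies the mean rule to a crude differential move, so it illustrates the control flow (initialize, update, greedy selection) rather than reproducing INFO itself.

```python
# Toy run following the skeleton of Algorithm 1 (not a faithful INFO port).
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: float(np.sum(x ** 2))
Np, D, maxg = 30, 10, 200
lb, ub = -100.0, 100.0

X = lb + rng.random((Np, D)) * (ub - lb)      # STEP 1: random initialization
fit = np.array([f(x) for x in X])

for g in range(1, maxg + 1):                  # STEP 2: main generation loop
    best = int(np.argmin(fit))
    x_bs = X[best]
    alpha = 2.0 * np.exp(-4.0 * g / maxg)     # exponential schedule, Eq. (5.1)
    for i in range(Np):
        sigma = 2.0 * alpha * rng.random() - alpha
        a, b, c = rng.choice(Np, size=3, replace=False)
        mean_move = (X[a] - X[b]) + (X[b] - X[c])   # crude stand-in for MeanRule
        ca = rng.standard_normal() * (x_bs - X[a]) / (fit[best] - fit[a] + 1e-25)
        u = np.clip(X[i] + sigma * mean_move + ca, lb, ub)
        fu = f(u)
        if fu < fit[i]:                       # greedy selection, as in Algorithm 1
            X[i], fit[i] = u, fu

print("best objective:", fit.min())           # STEP 3: report the best result
```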
Fig. 4. Flowchart of INFO
5. Results and discussion
To evaluate and confirm the efficiency of an optimization algorithm, several test
problems should be considered. Therefore, this work tested the INFO algorithm's
performance on 19 mathematical benchmark functions, 13 of which (f1-f7 and f8-f13) have been
widely utilized in previous studies [38, 39, 48] and have unimodal (UF) and multimodal (MF)
search spaces, respectively. Functions f14- f19 are composite functions that have also been
considered in several previous studies [17, 31]. In the challenging composite functions (CFs), the global solution is shifted to a random position and the functions are rotated, which occasionally places the global solution within infeasible space boundaries, and variants of the benchmark functions are combined.
A detailed explanation of these functions is reported in Tables 2-4. INFO was
compared with the GWO, GSA, SCA, GA, PSO, and BA optimization algorithms. We
followed fair comparison standards. For all the optimizers, the population size and the total
number of iterations were set to 30 and 500. The values of the main parameters for all
algorithms are given in Table 1. It is pertinent to mention that all of the control parameters
were set based on their developer or within the range of the suggestions to achieve the best
efficiency of the optimizers. The parameter settings of GWO and SCA were obtained from
previous works [49]. The benchmark functions for each optimization algorithm were tested
30 times. Table 5 presents the average and standard deviation of the fitness functions for the
30 runs.
Table 1. Values of control parameters for all comparative algorithms

GWO:  convergence constant a decreased from 2 to 0
BA:   A (loudness) = 0.5, r (pulse rate) = 0.5, Qmin = 0, Qmax = 2
GA:   crossover probability = 0.8, mutation probability = 0.05
PSO:  c1 = c2 = 1.5, inertia weight w linearly decreased from 0.7 to 0.3
GSA:  G0 (initial gravitational constant) = 100, α = 20
SCA:  A = 2
INFO: c = 2, d = 4
Table 2. UF test problems (function expressions rendered as images in the original; all functions have dimension 30 and fmin = 0)

f1: Range [-100,100]
f2: Range [-10,10]
f3: Range [-100,100]
f4: Range [-100,100]
f5: Range [-30,30]
f6: Range [-100,100]
f7: Range [-1.28,1.28]
Table 3. MF test problems (function expressions rendered as images in the original; all functions have dimension 30)

f8:  Range [-500,500], fmin = -418.9829 × D
f9:  Range [-5.12,5.12], fmin = 0
f10: Range [-32,32], fmin = 0
f11: Range [-600,600], fmin = 0
f12: Range [-50,50], fmin = 0
f13: Range [-50,50], fmin = 0
Table 4. CF test problems (all functions have dimension 30, range [-5,5], and fmin = 0)

f14: f1-f10 = Sphere Function; σ1,...,σ10 = 1,1,...,1; λ1,...,λ10 = 5/100, 5/100, ..., 5/100
f15: f1-f10 = Griewank's Function; σ1,...,σ10 = 1,1,...,1; λ1,...,λ10 = 5/100, 5/100, ..., 5/100
f16: f1-f10 = Griewank's Function; σ1,...,σ10 = 1,1,...,1; λ1,...,λ10 = 1,1,...,1
f17: f1,f2 = Ackley's Function; f3,f4 = Rastrigin's Function; f5,f6 = Sphere Function; f7,f8 = Weierstrass's Function; f9,f10 = Griewank's Function; σ1,...,σ10 = 1,2,1.5,1.5,1,1,1.5,1.5,2,2; λ1,...,λ10 = 2×5/32, 5/32, 2×1, 1, 2×5/100, 5/100, 2×10, 10, 2×5/60, 5/60
f18: f1,f2 = Rastrigin's Function; f3,f4 = Weierstrass's Function; f5,f6 = Griewank's Function; f7,f8 = Ackley's Function; f9,f10 = Sphere Function; σ1,...,σ10 = 1,1,...,1; λ1,...,λ10 = 1, 1, 10, 10, 5/60, 5/60, 5/32, 5/32, 5/100, 5/100
f19: f1,f2 = Rastrigin's Function; f3,f4 = Weierstrass's Function; f5,f6 = Griewank's Function; f7,f8 = Ackley's Function; f9,f10 = Sphere Function; σ1,...,σ10 = 1,1,...,1; λ1,...,λ10 = 1, 1, 10, 10, 5/60, 5/60, 5/32, 5/32, 5/100, 5/100
Table 5. Statistical results and comparison for test functions

Function        INFO       GWO        GSA        SCA        PSO        BA         GA
f1   Mean   2.59E-43   2.02E-27   2.49E-01   1.49E+00   2.43E-16   3.92E+00   1.74E-01
     SD     1.04E-43   4.27E-27   2.19E-01   8.94E-01   1.09E-16   6.08E+00   2.66E-02
f2   Mean   3.23E-21   8.03E-17   8.36E-01   3.30E-01   1.26E-07   1.37E-02   1.68E-01
     SD     2.29E-21   5.53E-17   2.56E-01   8.34E-02   1.90E-07   2.62E-02   6.61E-02
f3   Mean   6.46E-39   1.72E-05   1.22E+02   1.55E+03   8.55E+02   9.21E+03   3.50E-01
     SD     2.98E-38   6.95E-05   4.21E+01   1.01E+03   2.15E+02   5.30E+03   1.76E-01
f4   Mean   8.28E-22   8.51E-07   1.51E+00   1.02E+01   7.10E+00   3.71E+01   3.57E-01
     SD     4.49E-22   1.33E-06   2.22E-01   2.71E+00   2.54E+00   1.13E+01   2.81E-01
f5   Mean   2.47E+01   2.72E+01   2.62E+02   2.78E+02   4.01E+01   1.74E+04   2.78E+01
     SD     7.45E-01   8.02E-01   1.51E+02   1.45E+02   2.06E+01   2.83E+04   2.33E+00
f6   Mean   1.54E-06   6.75E-01   2.53E-01   1.89E+00   3.73E+00   2.19E+01   1.60E-02
     SD     3.93E-06   3.27E-01   1.79E-01   7.01E-01   4.08E+00   3.03E+01   8.51E-02
f7   Mean   1.62E-03   1.79E-03   1.56E+00   1.15E-01   8.73E-02   1.43E-01   8.55E-02
     SD     1.34E-03   6.61E-04   1.12E+00   3.86E-02   3.45E-02   2.29E-01   1.95E-02
f8   Mean   -9.47E+03  -6.16E+03  -6.23E+03  -6.63E+03  -2.76E+03  -3.70E+03  -5.61E+03
     SD     6.40E+02   8.55E+02   1.07E+03   6.69E+02   5.19E+02   2.73E+02   6.76E+02
f9   Mean   0.00E+00   3.02E+00   1.02E+02   9.27E+00   2.49E+01   3.60E+01   3.26E+00
     SD     0.00E+00   4.52E+00   2.24E+01   3.17E+00   4.67E+00   4.00E+01   4.51E+00
f10  Mean   8.88E-16   1.03E-13   1.01E+00   9.59E-01   1.11E-08   1.65E+01   5.70E+00
     SD     0.00E+00   1.82E-14   5.90E-01   5.92E-01   2.64E-09   7.10E+00   3.10E+00
f11  Mean   0.00E+00   2.97E-03   2.06E-02   9.71E-01   2.78E+01   1.03E+00   3.91E-01
     SD     0.00E+00   6.45E-03   8.48E-03   7.43E-02   7.07E+00   3.91E-01   6.41E-01
f12  Mean   1.04E-02   5.93E-02   1.96E-02   1.64E+00   2.28E+00   8.65E+00   6.98E-01
     SD     3.16E-02   9.16E-02   3.96E-02   1.63E+00   1.05E+00   7.77E+00   9.24E-01
f13  Mean   4.30E-02   6.59E-01   7.31E-02   6.81E+00   8.94E+00   5.57E+05   8.77E+00
     SD     7.36E-02   3.18E-01   4.94E-02   7.71E+00   6.51E+00   1.92E+06   8.05E+00
f14  Mean   1.11E-04   6.42E+01   4.33E+01   5.56E-01   3.51E-02   2.29E+02   3.18E+01
     SD     9.52E-05   7.66E+01   8.17E+01   1.23E-01   1.91E-01   7.53E+01   7.47E+01
f15  Mean   5.94E+01   2.25E+02   1.54E+02   2.58E+02   3.04E+02   2.98E+02   3.15E+02
     SD     6.31E+01   1.27E+02   1.31E+02   1.45E+02   1.42E+02   8.53E+01   1.59E+02
f16  Mean   2.82E+01   5.27E+02   8.77E+01   2.18E+02   1.77E+02   1.68E+03   1.27E+02
     SD     4.63E+01   2.92E+02   7.71E+01   1.19E+02   1.49E+02   5.78E+01   9.41E+01
f17  Mean   9.08E+02   9.52E+02   8.32E+02   9.48E+02   1.02E+03   5.37E+02   1.08E+03
     SD     5.10E+00   2.70E+01   2.20E+01   3.88E+01   2.74E+01   7.75E+01   1.61E+02
f18  Mean   2.48E+02   3.04E+02   3.48E+02   4.28E+02   2.52E+02   5.35E+02   4.31E+02
     SD     8.96E+01   1.41E+02   1.47E+02   7.85E+01   1.94E+02   8.02E+01   1.07E+02
f19  Mean   2.78E+02   4.69E+02   3.83E+02   8.27E+02   3.56E+02   7.04E+02   3.73E+02
     SD     8.05E+01   1.04E+02   6.27E+01   1.03E+02   1.08E+02   1.49E+02   1.18E+02
5.1. Assessment of the exploitative behavior
In this section, the exploitation ability of INFO is investigated using the UFs.
Functions f1- f7 are unimodal and have one global solution. Table 5 reveals that INFO is very
promising and competitive with the comparative algorithms. Specifically, it was the best
method to optimize all functions in terms of the average objective function values for 30 runs.
The proposed algorithm can outperform the others on all functions according to standard
deviation values, except for GWO on function f5. Therefore, INFO affords an appropriate exploitation search ability due to its embedded exploitation phase.
5.2. Assessment of the exploratory behavior
To inspect the exploration search capability of INFO in the study, MFs (f8-f13) were
used. These functions have many local optima solutions whose number rises exponentially
with the dimension of the problems and, thus, are suitable to verify optimizers' exploration
search ability. Table 5 presents the results of INFO and other optimization methods, revealing
that the proposed algorithm for MFs presents a very suitable exploration search ability.
Specifically, INFO outperformed all other algorithms in terms of the average of the objective
function and standard deviation values found for all the functions, except on the standard
deviations of functions f8 and f13. The presented results demonstrate that the INFO algorithm
has an excellent competency in exploration search.
5.3. Assessment of the ability to escape local optima
In this section, the ability of INFO on CFs (f14-f19) is examined. These functions are
very challenging for optimizers because only an appropriate balance between exploitation and
exploration can escape local optimum solutions. Table 5 displays the results of all examined
algorithms on the CFs. It is evident that INFO achieved favorable results regarding the
average of objective function values for all functions and lower standard deviations for
functions f14-f17 compared to the other algorithms but failed on functions f18 and f19. This means
that the proposed algorithm has a suitable balance between exploitation and exploration,
preventing it from getting stuck in local optimum solutions. This efficient performance
is due to the obtained vectors by using the updating rule and vector combining operators. The
updating rule provides two vectors to improve the local (exploitation) and global (exploration)
search in the search space (Eq. 8). Then, the vector combining operator combines them with
the current vector with a certain probability. This process helps the INFO algorithm explore
the search space on both the global and local scales with a suitable balance. Also, the local
operator makes the optimization process safe from trapping in local optima positions.
By employing the Friedman test [50], it was found that the INFO algorithm achieved
a top rank, followed by GWO and GSA, as seen in Table 6. This further verifies that INFO’s
performance is better than the other well-known optimizers.
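For reference, a mean-rank comparison of this kind can be computed with SciPy as sketched below; the score matrix is a made-up stand-in (rows = test functions, columns = algorithms), not data from Table 5.

```python
# Sketch of a Friedman mean-rank comparison (placeholder scores).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

scores = np.array([              # smaller objective value = better
    [2.6e-43, 2.0e-27, 2.5e-01],
    [3.2e-21, 8.0e-17, 8.4e-01],
    [6.5e-39, 1.7e-05, 1.2e+02],
    [8.3e-22, 8.5e-07, 1.5e+00],
])
stat, p = friedmanchisquare(*scores.T)            # one sample per algorithm
mean_ranks = rankdata(scores, axis=1).mean(axis=0)
print(f"chi2={stat:.3f}, p={p:.4f}, mean ranks={mean_ranks}")
```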
Table 6. Mean rankings computed by the Friedman test for the test functions

Function     INFO   GWO   GSA   SCA   PSO   BA    GA
f1           1      2     3     7     5     4     6
f2           1      2     3     4     7     5     6
f3           1      2     5     7     4     3     6
f4           1      2     5     7     4     3     6
f5           1      2     4     7     5     3     6
f6           1      4     6     7     3     2     5
f7           1      2     4     6     7     3     5
f8           1      4     7     6     3     5     2
f9           1      2     5     6     7     3     4
f10          1      2     3     7     5     6     4
f11          1      2     7     6     3     4     5
f12          1      3     6     7     2     4     5
f13          1      3     6     7     2     5     4
f14          1      6     2     7     5     4     3
f15          1      3     6     5     2     7     4
f16          1      6     4     7     2     3     5
f17          3      5     6     1     2     7     4
f18          1      3     2     7     4     6     5
f19          1      5     4     7     2     6     3
Mean Rank    1.11   3.16  4.63  6.21  3.89  4.37  4.63
Final Rank   1      2     5     7     3     4     5
5.4. Investigation of convergence speed
In this paper, three metrics, including search history, trajectory curve, and convergence
rate, were considered to assess the INFO algorithm's convergence behavior. Accordingly, 8
different benchmark functions, i.e. f1, f3, f6, f7, f9, f10, f11, and f13, each with a dimension of 2, were
chosen. To solve these test functions, INFO used five solutions over 200 iterations.
The search history and trajectory curves of the five solutions in their first dimension
are depicted in Fig. 5. Generally, a low-density distribution illustrates the exploration, and a
high-density distribution represents the exploitation. According to this figure, the solutions'
distribution density demonstrates how INFO can search globally and locally in the solution
space, where the solutions have a high density in the region close to the global optima and
have a low density in the regions far from the global optima. Therefore, it can be concluded
that INFO successfully explores promising regions of the solution space to find the best position.
Fig. 5. Search history, trajectory, average fitness, and convergence metrics
In the average fitness graph in Fig. 5, the varied history of the fitness of solutions in
INFO during the optimization process can be seen, where the average fitness curve suddenly
decreased in the early iterations. This behavior confirms the superior convergence speed and
accurate search capability of INFO.
The trajectory curves represent the global and local search behaviors of each
optimizer. Fig. 5 displays the trajectory graphs of five solutions for the first dimension,
revealing the high variation of curves in the initial generation. As the number of iterations
increased, the variation of the curves decreased. Since high and low variations indicate the
exploration and exploitation, respectively, it can be deduced that the INFO algorithm first
performs the exploration and then the exploitation.
The most crucial goal of each optimization algorithm is to obtain global optima, which
can be achieved by obtaining the convergence curves to visualize the algorithms' behaviors.
According to Fig. 5, the convergence variations of functions f1, f6, f9, f11, and f13 dropped rapidly
in the early iterations, demonstrating that INFO implemented the exploitation search more effectively than the exploration. In contrast, the convergence variations of functions f3, f7, and f10 decreased relatively slowly, indicating the better efficiency of INFO in the exploration search than in the exploitation.
Different variations of convergence graphs for the optimizers are displayed in Fig. 6,
which compares the convergence speeds of the INFO algorithms and other optimizers on
some of the benchmark functions. Accordingly, INFO can compete on the same level as the
other contestant algorithms with a very promising convergence speed.
As first reported by Berg et al. [51], sudden changes in the convergence curve during the early optimization steps help in reaching the best solution. This behavior helps an optimization method search the space broadly, and these changes should then diminish to support the exploitation search in the final stages of the optimization process. From this viewpoint, the exploration phase first probes the search space, and the solutions then converge to the best solution in the exploitation phase.
As can be observed in Fig. 6, INFO presents two convergence modes for optimizing
the problems. The first mode is rapid convergence in the initial iterations, whereby the INFO
algorithm can reach a more accurate solution than the contestant algorithms. This high
convergence rate can be seen in f1, f2, f4, f10, and f13. The second convergence mode increases the convergence rate as the number of iterations grows. This is owed to the proposed adaptation method in INFO, which aids it in exploring appropriate areas of the search domain in the early iterations and improves the convergence speed after roughly 100 iterations. Regarding functions f7 and f15 to f19, the most complex and challenging benchmark functions, the optimization results indicate that INFO profits from a suitable balance of exploitation and exploration that helps it achieve the global solution.
Consequently, these results demonstrate the efficient performance of INFO to optimize
complex problems.
5.5. Wall-clock time analysis of INFO
This section investigates the run-time of INFO compared with other methods on 13
benchmark functions. All optimization methods were run ten times on each test function
separately, and the results are reported in Table 7. From the table, the INFO optimization
process took a relatively long time due to the calculation of its two operators (i.e., vector
combining and local search stage). Nevertheless, INFO outperformed some methods, such
as GA and GSA. Generally, albeit with a relatively long run-time, INFO has considerable advantages over the other methods.
Table 7. Comparison of the run-time of INFO and other methods

Function   INFO   GA     GSA     GWO    PSO    SCA    BA
f1         4.70   5.84   8.46    1.30   0.64   1.31   1.56
f2         4.70   5.96   7.18    1.42   0.77   1.14   1.67
f3         7.91   9.34   10.47   4.72   4.51   2.50   5.42
f4         4.08   5.04   7.06    1.18   0.52   1.13   1.55
f5         4.24   5.51   7.23    1.38   0.66   1.41   1.73
f6         4.07   5.05   7.03    1.20   0.50   1.52   1.39
f7         4.77   5.90   7.66    1.87   1.18   2.40   2.09
f8         4.38   5.75   7.28    1.44   0.81   1.80   1.83
f9         4.33   5.15   7.08    1.38   0.62   1.61   1.82
f10        4.14   5.36   7.17    1.65   0.76   1.78   1.76
f11        4.56   5.75   7.55    1.44   0.88   2.20   2.22
f12        6.50   8.04   9.38    3.37   2.80   4.59   4.28
f13        6.44   8.24   9.07    3.34   2.79   4.86   4.05
5.6. Assessment of INFO on CEC-BC-2017 test functions
To further evaluate the INFO algorithm, the widely used and complicated CEC-BC-2017 benchmark problems were utilized, including rotated and shifted unimodal, multimodal, hybrid, and composite test functions [52]. The characteristics of these problems are available in Appendix A (Table 8). The mathematical formulation of these test functions is also available in the initial IEEE report. The proposed INFO algorithm was assessed against these benchmark functions, and its results were compared to those of other well-known optimizers. For all test functions, the dimension was equal to 10. The optimizers were run 30 times with 1000 iterations for each test function. The control parameters for each optimizer were
the same as those considered in Section 5. Table 9 reports the results acquired by INFO and
the other algorithms, including each function's average and standard deviation over 30 runs.
According to Table 10, INFO takes the top Friedman mean rank, and the efficiency
of the INFO, PSO, and GWO are much better than the other optimizers. This performance
illustrates INFO’s capability to outperform other well-known optimizers and further indicates
that INFO can solve complex optimization problems.
Table 8. Properties of the CEC-BC-2017 test functions [52] (all functions are defined on the domain [-100,100]; the value after each name is the global optimum)

Unimodal Functions
f1:  Shifted and Rotated Bent Cigar Function — 100
f3:  Shifted and Rotated Zakharov Function — 300

Multimodal Functions
f4:  Shifted and Rotated Rosenbrock's Function — 400
f5:  Shifted and Rotated Rastrigin's Function — 500
f6:  Shifted and Rotated Expanded Schaffer's Function — 600
f7:  Shifted and Rotated Lunacek Bi-Rastrigin Function — 700
f8:  Shifted and Rotated Non-Continuous Rastrigin's Function — 800
f9:  Shifted and Rotated Levy Function — 900
f10: Shifted and Rotated Schwefel's Function — 1000

Hybrid Functions
f11: Hybrid Function of Zakharov, Rosenbrock, and Rastrigin's — 1100
f12: Hybrid Function of High Conditioned Elliptic, Modified Schwefel, and Bent Cigar — 1200
f13: Hybrid Function of Bent Cigar, Rosenbrock, and Lunacek Bi-Rastrigin — 1300
f14: Hybrid Function of Elliptic, Ackley, Schaffer, and Rastrigin — 1400
f15: Hybrid Function of Bent Cigar, HGBat, Rastrigin, and Rosenbrock — 1500
f16: Hybrid Function of Expanded Schaffer, HGBat, Rosenbrock, and Modified Schwefel — 1600
f17: Hybrid Function of Katsuura, Ackley, Expanded Griewank plus Rosenbrock, Modified Schwefel, and Rastrigin — 1700
f18: Hybrid Function of High Conditioned Elliptic, Ackley, Rastrigin, HGBat, and Discus — 1800
f19: Hybrid Function of Bent Cigar, Rastrigin, Expanded Griewank plus Rosenbrock, Weierstrass, and Expanded Schaffer — 1900
f20: Hybrid Function of HappyCat, Katsuura, Ackley, Rastrigin, Modified Schwefel, and Schaffer — 2000

Composition Functions
f21: Composition Function of Rosenbrock, High Conditioned Elliptic, and Rastrigin — 2100
f22: Composition Function of Rastrigin's, Griewank's, and Modified Schwefel's — 2200
f23: Composition Function of Rosenbrock, Ackley, Modified Schwefel, and Rastrigin — 2300
f24: Composition Function of Ackley, High Conditioned Elliptic, Griewank, and Rastrigin — 2400
f25: Composition Function of Rastrigin, HappyCat, Ackley, Discus, and Rosenbrock — 2500
f26: Composition Function of Expanded Schaffer, Modified Schwefel, Griewank, Rosenbrock, and Rastrigin — 2600
f27: Composition Function of HGBat, Rastrigin, Modified Schwefel, Bent Cigar, High Conditioned Elliptic, and Expanded Schaffer — 2700
f28: Composition Function of Ackley, Griewank, Discus, Rosenbrock, HappyCat, and Expanded Schaffer — 2800
f29: Composition Function of shifted and rotated Rastrigin, Expanded Schaffer, and Lunacek Bi-Rastrigin — 2900
f30: Composition Function of shifted and rotated Rastrigin, Non-Continuous Rastrigin, and Levy Function — 3000
Table 9. Statistical results and comparison for CEC-BC-2017 functions

Function        INFO       GWO        GSA        SCA        PSO        BA         GA
f1   Mean   1.00E+02   3.24E+07   3.89E+02   7.32E+08   1.87E+03   1.24E+09   1.64E+03
     SD     2.39E-05   1.10E+08   4.21E+02   2.91E+08   2.46E+03   1.10E+09   1.30E+03
f3   Mean   3.00E+02   1.69E+03   1.06E+04   1.97E+03   3.00E+02   1.29E+04   2.30E+03
     SD     2.04E-09   1.91E+03   1.96E+03   1.26E+03   4.60E-14   1.27E+04   1.31E+03
f4   Mean   4.00E+02   4.14E+02   4.06E+02   4.47E+02   4.05E+02   5.25E+02   4.09E+02
     SD     5.33E-01   1.59E+01   5.58E-01   1.81E+01   1.21E+01   9.19E+01   1.75E+01
f5   Mean   5.12E+02   5.16E+02   5.58E+02   5.52E+02   5.36E+02   5.53E+02   5.40E+02
     SD     6.08E+00   7.58E+00   1.10E+01   5.36E+00   1.34E+01   2.07E+01   1.27E+01
f6   Mean   6.00E+02   6.01E+02   6.23E+02   6.17E+02   6.07E+02   6.41E+02   6.20E+02
     SD     6.65E-03   1.23E+00   8.57E+00   2.91E+00   5.05E+00   1.34E+01   1.06E+01
f7   Mean   7.24E+02   7.29E+02   7.15E+02   7.73E+02   7.24E+02   7.84E+02   7.49E+02
     SD     6.79E+00   8.89E+00   2.58E+00   8.89E+00   6.36E+00   3.14E+01   1.86E+01
f8   Mean   8.12E+02   8.15E+02   8.20E+02   8.38E+02   8.21E+02   8.36E+02   8.21E+02
     SD     5.13E+00   6.98E+00   4.14E+00   7.22E+00   9.81E+00   1.32E+01   1.03E+01
f9   Mean   9.00E+02   9.08E+02   9.00E+02   9.97E+02   9.00E+02   1.65E+03   1.05E+03
     SD     7.77E-01   1.69E+01   0.00E+00   3.04E+01   4.72E-14   4.53E+02   1.02E+02
f10  Mean   1.64E+03   1.65E+03   2.80E+03   2.24E+03   1.92E+03   2.28E+03   2.03E+03
     SD     2.40E+02   2.43E+02   3.38E+02   2.27E+02   2.21E+02   3.13E+02   3.19E+02
f11  Mean   1.11E+03   1.12E+03   1.14E+03   1.20E+03   1.14E+03   1.83E+03   1.15E+03
     SD     8.72E+00   1.43E+01   1.16E+01   4.33E+01   1.73E+01   6.76E+02   3.87E+01
f12  Mean   2.78E+03   7.09E+05   7.61E+05   1.70E+07   1.57E+04   1.36E+06   9.90E+05
     SD     1.80E+03   8.27E+05   4.90E+05   1.24E+07   1.08E+04   1.99E+06   1.06E+06
f13  Mean   1.44E+03   1.14E+04   1.14E+04   2.51E+04   9.80E+03   1.79E+04   9.37E+03
     SD     1.23E+02   8.43E+03   2.36E+03   2.15E+04   7.18E+03   1.63E+04   5.33E+03
f14  Mean   1.43E+03   3.03E+03   6.78E+03   1.70E+03   1.73E+03   2.46E+03   3.81E+03
     SD     1.02E+01   1.79E+03   2.01E+03   6.61E+02   4.41E+02   1.22E+03   2.19E+03
f15  Mean   1.52E+03   3.68E+03   2.00E+04   2.25E+03   2.35E+03   4.03E+04   3.68E+03
     SD     1.72E+01   2.54E+03   4.62E+03   5.90E+02   1.42E+03   5.84E+04   2.09E+03
f16  Mean   1.65E+03   1.76E+03   2.16E+03   1.73E+03   1.85E+03   2.02E+03   1.93E+03
     SD     5.73E+01   1.31E+02   1.19E+02   4.71E+01   7.58E+01   1.70E+02   1.10E+02
f17  Mean   1.72E+03   1.76E+03   1.84E+03   1.78E+03   1.77E+03   1.92E+03   1.76E+03
     SD     1.50E+01   3.59E+01   1.22E+02   2.00E+01   3.03E+01   1.36E+02   2.95E+01
f18  Mean   1.86E+03   2.51E+04   8.37E+03   1.07E+05   9.60E+03   1.83E+04   1.21E+04
     SD     4.13E+01   1.31E+04   4.13E+03   7.17E+04   8.67E+03   1.66E+04   1.11E+04
f19  Mean   1.91E+03   1.50E+04   4.28E+04   6.75E+03   3.00E+03   2.35E+04   7.47E+03
     SD     7.06E+00   4.70E+04   1.92E+04   5.98E+03   1.77E+03   2.97E+04   4.58E+03
f20  Mean   2.01E+03   2.06E+03   2.29E+03   2.10E+03   2.08E+03   2.23E+03   2.14E+03
     SD     1.22E+01   4.99E+01   9.60E+01   3.12E+01   4.47E+01   1.16E+02   7.08E+01
Table 9 (continued). Statistical results and comparison for CEC-BC-2017 functions

Function        INFO       GWO        GSA        SCA        PSO        BA         GA
f21  Mean   2.28E+03   2.30E+03   2.36E+03   2.25E+03   2.30E+03   2.32E+03   2.32E+03
     SD     5.39E+01   3.57E+01   2.22E+01   6.29E+01   5.82E+01   5.82E+01   4.36E+01
f22  Mean   2.30E+03   2.33E+03   2.30E+03   2.36E+03   2.30E+03   2.63E+03   2.31E+03
     SD     1.65E+01   1.12E+02   6.85E-11   3.83E+01   9.27E-01   1.45E+02   1.03E+01
f23  Mean   2.62E+03   2.62E+03   2.75E+03   2.66E+03   2.69E+03   2.66E+03   2.72E+03
     SD     7.28E+00   9.60E+00   5.60E+01   6.79E+00   3.96E+01   2.37E+01   4.77E+01
f24  Mean   2.75E+03   2.75E+03   2.55E+03   2.78E+03   2.73E+03   2.77E+03   2.81E+03
     SD     7.87E+00   1.13E+01   1.17E+02   3.92E+01   1.32E+02   8.47E+01   1.38E+02
f25  Mean   2.92E+03   2.94E+03   2.94E+03   2.96E+03   2.92E+03   3.01E+03   2.93E+03
     SD     3.07E+01   2.40E+01   1.36E+01   2.14E+01   2.29E+01   5.10E+01   2.31E+01
f26  Mean   3.11E+03   3.06E+03   3.70E+03   3.07E+03   3.07E+03   3.42E+03   3.55E+03
     SD     3.71E+02   3.16E+02   6.96E+02   3.48E+01   2.85E+02   3.99E+02   4.82E+02
f27  Mean   3.09E+03   3.09E+03   3.26E+03   3.10E+03   3.16E+03   3.13E+03   3.23E+03
     SD     1.61E+00   2.39E+00   3.95E+01   1.43E+00   4.39E+01   3.48E+01   4.36E+01
f28  Mean   3.30E+03   3.36E+03   3.46E+03   3.29E+03   3.17E+03   3.43E+03   3.29E+03
     SD     1.66E+02   9.23E+01   3.08E+01   6.58E+01   4.16E+01   1.14E+02   1.58E+02
f29  Mean   3.17E+03   3.19E+03   3.45E+03   3.23E+03   3.23E+03   3.38E+03   3.28E+03
     SD     3.11E+01   3.14E+01   1.42E+02   3.23E+01   4.08E+01   1.12E+02   6.89E+01
f30  Mean   8.55E+04   9.19E+05   9.83E+05   1.04E+06   9.80E+03   2.18E+06   4.45E+05
     SD     2.49E+05   1.08E+06   2.61E+05   8.05E+05   5.69E+03   3.14E+06   1.01E+06
5.7. Ranking analysis of INFO
In this section, multiple statistical tests, including Friedman's [50], Bonferroni-Dunn's [53], and Holm's [54], are considered to evaluate the difference between the efficiency of INFO and the other optimizers on the test functions. To implement a trustworthy comparison, this research divided the test functions into three groups: G1 (group 1) includes the unimodal, multimodal, and composite functions (Tables 2-4); G2 (group 2) consists of the CEC-BC-2017 test functions (Table 8); and G3 (group 3) is the combination of G1 and G2.
Table 10. Mean rankings computed by the Friedman test for the CEC-BC-2017 functions

Function     INFO   GWO   GSA   SCA   PSO   BA    GA
f1           1      5     2     6     4     7     3
f3           1.5    3     6     4     1.5   7     5
f4           1      5     3     6     2     7     4
f5           1      2     7     5     3     6     4
f6           1      2     6     4     3     7     5
f7           2.5    4     1     6     2.5   7     5
f8           1      2     3     7     4.5   6     4.5
f9           2      4     2     5     2     7     6
f10          1      2     7     5     3     6     4
f11          1      2     3.5   6     3.5   7     5
f12          1      3     4     7     2     6     5
f13          1      4.5   4.5   7     3     6     2
f14          1      5     7     2     3     4     6
f15          1      4.5   6     2     3     7     4.5
f16          1      3     7     2     4     6     5
f17          1      2.5   6     5     4     7     2.5
f18          1      6     2     7     3     5     4
f19          1      5     7     3     2     6     4
f20          1      2     7     4     3     6     5
f21          2      3.5   7     1     3.5   5.5   5.5
f22          2      5     2     6     2     7     4
f23          1.5    1.5   7     3.5   5     3.5   6
f24          3.5    3.5   1     6     2     5     7
f25          1.5    4.5   4.5   6     1.5   7     3
f26          4      1     7     2.5   2.5   5     6
f27          1.5    1.5   7     3     5     4     6
f28          4      5     7     2.5   1     6     2.5
f29          1      2     7     3.5   3.5   6     5
f30          2      4     5     6     1     7     3
Mean Rank    1.55   3.38  5.02  4.59  2.86  6.07  4.53
Final Rank   1      3     6     5     2     7     4
In the multiple statistical tests, the optimizers' results were first investigated to
determine their equality. When inequality was found, post-hoc analysis was performed to find
out which optimizer’s performance is significantly different from INFO. Therefore, the
Friedman test was conducted once again to obtain the optimizers' average ranks on the three
groups, as shown in Fig. 7.
Fig. 7. Bonferroni-Dunn test for all optimizers and different groups
The Bonferroni-Dunn test is a post-hoc analysis used to determine whether the efficiency of two optimizers is significantly different, i.e., whether the difference between the mean ranks of the optimization methods is larger than the critical difference (CD):
$CD = Q_{\alpha} \sqrt{\dfrac{m(m+1)}{6n}}$   (13)
where $Q_{\alpha}$ is the critical value, calculated based on the work in [55], and m and n are the numbers of optimizers and test problems, respectively. In this work, INFO was used as the control optimizer. In Fig. 8, the horizontal lines show the CD thresholds for the INFO algorithm. For the two common significance levels of 0.05 and 0.1, the threshold lines were determined and are displayed in Fig. 8 as dashed and dotted lines, respectively. In the three groups, INFO had the lowest mean ranks (G1 = 1.11, G2 = 1.55, G3 = 1.33) and thus outperforms the other optimizers, whose mean ranks lie above the CD lines. It is pertinent to note that the PSO rank is below the threshold line in G2.
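As a worked illustration, Eq. (13) can be evaluated as follows; the Q values used here are the commonly tabulated two-tailed critical values for comparing seven algorithms and are assumptions for illustration rather than numbers quoted from [55].

```python
# Computing the Bonferroni-Dunn critical difference of Eq. (13).
import math

def critical_difference(q_alpha, m, n):
    """CD = Q_alpha * sqrt(m*(m+1) / (6*n)) for m algorithms on n problems."""
    return q_alpha * math.sqrt(m * (m + 1) / (6.0 * n))

m, n = 7, 19                               # seven optimizers, nineteen G1 functions
print(critical_difference(2.638, m, n))    # assumed Q at alpha = 0.05
print(critical_difference(2.394, m, n))    # assumed Q at alpha = 0.10
```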
However, the Bonferroni-Dunn test does not distinguish between optimizers whose mean ranks are below the threshold line. Therefore, the present research used Holm's test to determine whether there is a substantial difference between the optimizers with ranks below the threshold line. To implement Holm's test, all optimizers were sorted by their p-values and compared with α/i, where i is the algorithm number. If the p-value is less than the corresponding significance level (α/i), the optimizer is significantly different. Tables 11 and 13 (G1 and G3) show Holm's test results for significance levels α of 0.05 and 0.1, revealing a significant difference between the efficiency of INFO and the other optimizers. For G2 (Table 12), the Bonferroni-Dunn test indicates no significant difference between INFO and PSO, while Holm's test demonstrates a significant difference between these two. Consequently, in Fig. 8, the mean ranks of INFO in the three groups are very close to each other, while the other optimizers have unstable performance across groups. Finally, it may be concluded that INFO has reliable and accurate efficiency in all groups compared to the other optimizers.
Table 11. Holm's test for G1 test functions (INFO as the control optimizer)

INFO vs.   Rank   Z-value   P-value    α/i (α=0.05)   α/i (α=0.1)
SCA        6.21   7.276     1.71E-13   0.00833        0.01667
GA         4.63   5.022     2.55E-07   0.01000        0.02000
GSA        4.63   5.022     2.55E-07   0.01250        0.02500
BA         4.37   4.651     1.65E-06   0.01667        0.03333
PSO        3.89   3.966     3.64E-05   0.02500        0.05000
GWO        3.16   2.924     1.72E-03   0.05000        0.10000

Table 12. Holm's test for G2 test functions (INFO as the control optimizer)

INFO vs.   Rank    Z-value   P-value    α/i (α=0.05)   α/i (α=0.1)
BA         6.069   7.967     7.77E-16   0.00833        0.01667
GSA        5.017   6.116     4.78E-10   0.01000        0.02000
SCA        4.586   5.358     4.19E-08   0.01250        0.02500
GA         4.534   5.252     7.48E-08   0.01667        0.03333
GWO        3.379   3.225     6.28E-04   0.02500        0.05000
PSO        2.862   2.309     1.04E-02   0.05000        0.10000
Table 13. Holm's test for G3 test functions (INFO as the control optimizer)

INFO vs.   Rank   Z-value   P-value    α/i (α=0.05)   α/i (α=0.1)
BA         5.40   5.807     3.18E-09   0.00833        0.01667
SCA        5.22   5.550     1.42E-08   0.01000        0.02000
GSA        4.82   4.979     3.18E-07   0.01250        0.02500
GA         4.58   4.637     1.76E-06   0.01667        0.03333
PSO        3.38   2.924     1.72E-03   0.02500        0.05000
GWO        3.27   2.767     2.82E-03   0.05000        0.10000
5.8. Performance comparison of INFO with advanced algorithms
In this section, the performance of INFO is compared with advanced algorithms,
including SCADE [56], CGSCA [57], OBLGWO [58], RDWOA [59], CCMWOA [60],
BMWOA [61], CLPSO [62], RCBA [63], and CBA [64] on the CEC-BC-2017 test functions.
For all the optimization algorithms, the population size and the total number of generations
were set to 30 and 500, respectively. To decrease the effect of random behavior in each
optimizer on the results, all optimizers were run 30 different times for each test function.
Table 14 indicates the average (AVG) and standard deviation results obtained by all
optimization methods, which confirm that INFO is a very competitive optimizer to solve
CEC-BC-2017 functions. In f1, f3-f19, f21, f23, f24, f26, f29, and f30, the AVG of INFO was smaller
than that of the other optimizers. These results show that INFO performed better on 24 out
of 29 test functions than other advanced algorithms.
Furthermore, the Wilcoxon Signed Ranks (WSR) test [65] was utilized to compare the optimizers' overall efficiency over 30 independent runs. In the WSR, R+ indicates the sum of ranks over all runs in which INFO outperformed the competitor algorithm; comparatively, R- represents the sum of ranks over all runs in which the competitor algorithm outperformed INFO. The p-value specifies the significance of the results in a statistical hypothesis test (α = 0.05). The comparisons of the optimizers by the WSR test are reported in Table 15, where the symbol '+' shows that INFO has better efficiency than its competitor algorithm, '-' indicates that the competitor algorithm's efficiency is better than INFO's, and '=' denotes similar performance between INFO and the competitor algorithm. Each test's statistical results for the 30 runs are presented in Table 16, which shows that the INFO algorithm performs impressively better than its competitors.
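A sketch of such a pairwise WSR comparison with SciPy follows; the per-run objective values are synthetic placeholders, not results from Table 14.

```python
# Pairwise Wilcoxon signed-rank comparison over 30 runs (placeholder data).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
info_runs = rng.normal(100.0, 1.0, size=30)    # stand-in objective values
rival_runs = rng.normal(101.0, 1.5, size=30)

stat, p = wilcoxon(info_runs, rival_runs)      # paired, two-sided by default
print(f"W={stat:.1f}, p={p:.2e}, significant at 0.05: {p < 0.05}")
```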
Fig. 8 also depicts the convergence curve of some CEC-BC-2017 test functions.
According to this figure, INFO can explore a superior solution at a fast convergence rate
compared with the other optimizers. Moreover, the Friedman test was utilized to calculate all
algorithms' average ranks, revealing that INFO has the best rank value (1.22) and performed
much better than the other optimizers (Table 17).
Table 14. Statistical results and comparison for CEC-BC-2017 functions

Function        INFO       SCADE      CGSCA      OBLGWO     RDWOA      CCMWOA     BMWOA      CLPSO      RCBA       CBA
f1   Mean   1.32E+05   2.85E+10   2.53E+10   1.69E+08   1.01E+09   1.17E+10   1.23E+09   1.23E+10   7.87E+05   1.99E+06
     SD     2.15E+05   3.80E+09   3.85E+09   8.23E+07   1.35E+09   3.83E+09   5.66E+08   2.54E+09   2.65E+05   2.17E+06
f3   Mean   2.09E+04   7.72E+04   7.06E+04   5.09E+04   6.27E+04   7.36E+04   8.10E+04   1.57E+05   9.56E+04   1.01E+05
     SD     8.46E+03   5.86E+03   8.92E+03   8.86E+03   1.21E+04   5.19E+03   7.01E+03   2.14E+04   4.39E+04   6.20E+04
f4   Mean   5.00E+02   6.12E+03   3.46E+03   5.49E+02   6.39E+02   1.78E+03   7.29E+02   3.02E+03   5.08E+02   5.14E+02
     SD     2.78E+01   1.24E+03   1.15E+03   2.56E+01   1.18E+02   7.37E+02   7.13E+01   9.93E+02   2.58E+01   2.78E+01
f5   Mean   6.15E+02   8.79E+02   8.53E+02   7.00E+02   7.32E+02   8.05E+02   8.26E+02   8.26E+02   8.20E+02   8.12E+02
     SD     2.92E+01   2.11E+01   2.63E+01   5.54E+01   6.57E+01   3.78E+01   3.01E+01   2.12E+01   5.21E+01   6.58E+01
f6   Mean   6.13E+02   6.80E+02   6.71E+02   6.33E+02   6.36E+02   6.71E+02   6.68E+02   6.58E+02   6.73E+02   6.77E+02
     SD     6.73E+00   6.29E+00   6.67E+00   1.45E+01   8.44E+00   7.70E+00   1.05E+01   6.32E+00   9.04E+00   1.50E+01
f7   Mean   9.61E+02   1.27E+03   1.24E+03   9.69E+02   1.06E+03   1.26E+03   1.24E+03   1.22E+03   1.87E+03   1.78E+03
     SD     7.97E+01   4.96E+01   4.69E+01   6.49E+01