
RUN beyond the Metaphor:

An Efficient Optimization Algorithm Based on

Runge Kutta Method

Iman Ahmadianfara*, Ali Asghar Heidarib,c, Amir H. Gandomid , Xuefeng Chue, Huiling Chenf

a Department of Civil Engineering, Behbahan Khatam Alanbia University of Technology, Behbahan, Iran

Email: i.ahmadianfar@bkatu.ac.ir, im.ahmadian@gmail.com

b School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1439957131,

Iran.

Email: as_heidari@ut.ac.ir, aliasghar68@gmail.com

c Department of Computer Science, School of Computing, National University of Singapore, Singapore 117417,

Singapore

Email: aliasgha@comp.nus.edu.sg, t0917038@u.nus.edu

d University of Technology Sydney, Ultimo, NSW 2007, Australia.

Email: gandomi@uts.edu.au

e Department of Civil & Environmental Engineering, North Dakota State University, Department 2470, Fargo, ND,

USA.

Email: xuefeng.chu@ndsu.edu

f College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, Zhejiang

325035, China

Email: chenhuiling.jlu@gmail.com


Abstract

The optimization field suffers from metaphor-based "pseudo-novel" or "fancy" optimizers. Most of these cliché methods mimic animals' searching trends and contribute little to the optimization process itself. Moreover, they often suffer from locally efficient performance, biased verification on easy problems, and high similarity between their components' interactions. This study

attempts to go beyond the traps of metaphors and introduces a novel metaphor-free population-based optimization method based on the mathematical foundations and ideas of the Runge Kutta (RK) method, which is well known in mathematics. The proposed RUNge

Kutta optimizer (RUN) was developed to deal with various types of optimization

problems in the future. The RUN utilizes the logic of slope variations computed by the

RK method as a promising and logical searching mechanism for global optimization.

This search mechanism benefits from two active exploration and exploitation phases

for exploring the promising regions in the feature space and constructive movement

toward the global best solution. Furthermore, an enhanced solution quality (ESQ)

mechanism is employed to avoid the local optimal solutions and increase convergence

speed. The RUN algorithm's efficiency was evaluated by comparing it with other metaheuristic algorithms on 50 mathematical test functions and four real-world engineering problems. The RUN provided very promising and competitive results,

showing superior exploration and exploitation tendencies, fast convergence rate, and

local optima avoidance. In optimizing the constrained engineering problems, the

metaphor-free RUN demonstrated suitable performance as well. The authors invite the community to evaluate this deep-rooted optimizer extensively as a promising tool for real-world optimization. The source codes, supplementary materials, and guidance for the developed method will be publicly available at http://imanahmadianfar.com, http://aliasgharheidari.com/RUN.html, and http://mdm.wzu.edu.cn/RUN.html.

Keywords: Genetic algorithms; Evolutionary algorithm; Runge Kutta optimization;

Optimization; Swarm intelligence; Performance.

1. Introduction

Most real-world problems are complicated and present difficulties in being

optimized. These problems are often characterized by nonlinearity, multimodality, non-differentiability, and high dimensionality. Because of these properties, conventional gradient-based optimization methods, such as quasi-Newton, conjugate gradient, and sequential quadratic programming methods, are virtually unable to optimize such problems (Nocedal & Wright, 2006; Wu, 2016). Therefore, the existing literature suggests that other optimization techniques need to be developed for more efficient and

that other optimization techniques need to be developed for more efficient and


effective optimization. An optimization problem can take many-objective (Cao, Dong, et al., 2020; Cao, Wang, et al., 2020), multi-objective (Cao, Zhao, Yang, et al., 2019), memetic (Fu, et al., 2020), fuzzy (Chen, Qiao, et al., 2019), robust (Qu, et al., 2020), large-scale (Cao, Fan, et al., 2020; Cao, Zhao, et al., 2020), or single-objective forms. Real-world problems are faced every day, and we need

to develop solvers for deep learning applications (Chen, Chen, et al., 2020; Li, et al.,

2019; Qiu, et al., 2019), decision-making procedures (Liu, et al., 2016; Liu, et al.; Wu, et

al., 2020), optimal resource allocation (Yan, et al., 2020), image improvement

optimization (Wang, et al., 2020), deployment optimization in networks (Cao, Zhao,

Gu, et al., 2019), water-energy optimization (Chen, et al., 2017), training systems and

methods in artificial neural networks (Mousavi, et al., 2020), and optimization of the

parameters (Zhang, et al., 2006). Numerous metaheuristic optimization algorithms

(MOAs) have been developed and widely employed as suitable alternative optimizers to

solve various problems due to their flexibility and straightforward implementation

procedure (Chen, Fan, et al., 2020; Yang & Chen, 2019). MOAs can be categorized into

three groups (Kaveh & Bakhshpoori, 2016): evolutionary algorithms (EAs), physics-

based algorithms (PBAs), and swarm-based algorithms (SBAs). Nevertheless, they

present some drawbacks, including high sensitivity to their control parameter settings.

Also, they do not always converge toward the globally optimal solution (Wu, et al.,

2015). As they utilize some randomly generated components within the procedure (Sun,

et al., 2019), an appropriate balance between exploration and exploitation cannot be

ensured. This limit is one of the fundamental challenges within all kinds of methods in

this area.

The methods under the class of EAs are based on the principles of evolution in

nature, such as selection, recombination, and mutation. The genetic algorithm (GA), a widely used EA, was inspired by Darwin's theory of evolution (Holland, 1975).

Other EAs include genetic programming (GP) (Koza, 1994), differential evolution

(DE) (Storn & Price, 1995), and evolution strategy (Beyer & Schwefel, 2002). The

methods in this category have the deepest roots in their foundation theory compared to

other approaches, as Darwin's theory reshaped our vision of the tree of life. Later, the

development of physics-based algorithms (PBAs) emerged as a trend in the field

inspired by physics laws governing the surrounding world. For instance, among these

emerging PBA algorithms, simulated annealing (SA) is the most popular one

(Kirkpatrick, et al., 1983). Other PBAs include gravitational search algorithm (GSA)

(Rashedi, et al., 2009), central force optimization (Formato, 2007), differential search (DS) (Liu, et al., 2015), vortex search algorithm (VSA), and gradient-based optimizer (GBO) (Ahmadianfar, Bozorg-Haddad, et al., 2020). Years later, researchers also tried to simulate the cooperative behaviors of organisms in flocks, whether natural or artificial. For example, the main inspiration in particle swarm optimization (PSO) (Eberhart & Kennedy, 1995) is a

flock of birds' social behaviors. Other SBA examples include the Bat algorithm (BA)

(Yang, 2010b), cuckoo search (CS) (Gandomi, et al., 2013), ant colony optimization


(ACO) (Dorigo & Di Caro, 1999), artificial bee colony (ABC) (Karaboga & Basturk, 2007), firefly algorithm (FA) (Yang, 2010a), slime mould algorithm (SMA) (Li, et al., 2020; https://aliasgharheidari.com/SMA.html), and Harris hawks optimization (HHO) (Heidari, Mirjalili, et al., 2019; https://aliasgharheidari.com/HHO.html).

On the other hand, evolution served as the core idea of swarm-based methods, and the algorithms themselves evolved over time. Two large influences on these evolving algorithms were the search for an "unused" biological source of inspiration and its use as a dress for a set of equations. These unwanted, ambiguous directions first occurred when the black hole optimizer appeared as a modified PSO in a new dress (Piotrowski, et al., 2014). Later, another issue was raised by a team of researchers in China, who proved that the widespread grey wolf optimizer (GWO) has a defect and that there is a problem in its verification process (Niu, et al., 2019). It has also been exposed that there is no novelty in GWO, and that its structure resembles some variants of PSO with a metaphor (Camacho Villalón, et al., 2020). Moreover, this method's metaphor is not implemented as described in the original work (Camacho Villalón, et al., 2020). Such inaccuracy affects the reliability of these methods and questions the validity of metaphor-based methods like GWO and the black hole algorithm. Despite the weaknesses,

metaphors, and structural differences of various optimization algorithms (Tzanetos &

Dounias, 2020), they all employ two typical phases, exploration and exploitation, to

search the solution space regions (Salcedo-Sanz, 2016). Exploration is an optimization algorithm's ability to broadly search the entire solution space and discover its promising areas, while exploitation is the capability of an optimization algorithm to search around near-optimal solutions. Generally, the exploration phase of an optimizer should randomly produce solutions in various regions of the solution space during early iterations of the optimization process (Heidari, Aljarah, et al., 2019). In contrast, the exploitation phase of an optimization algorithm should create a robust local search. Thus, a well-designed method should be able to create a suitable balance between the exploration and exploitation phases.

Generally, creating an appropriate trade-off between exploration and

exploitation is an essential task for any optimization algorithm (Ahmadianfar,

Kheyrandish, et al., 2020). In this regard, many researchers have attempted to improve

the optimizers' performance by selecting appropriate control parameters or hybridizing

with other optimizers (Abdel-Baset, et al., 2019; Ahmadianfar, et al., 2019; Luo, et al.,

2017; Zhang, et al., 2018). Nevertheless, creating a robust algorithm that can balance

exploration and exploitation is a complex and challenging issue. Moreover, as there are

many real-world problems, more accurate and more consistent optimizers are needed.

To fill such a gap, a well-designed population-based optimization procedure is

proposed in this research. The proposed algorithm, Runge Kutta optimizer (RUN), was

designed according to the foundations of the Runge Kutta method (Kutta, 1901; Runge, 1895); for a better presentation of the term, we use "Runge Kutta" throughout this paper. RUN uses a specific slope calculation concept based on the Runge Kutta

method as an effective search engine for global optimization. The proposed algorithm

consists of two main parts: a search mechanism based on the Runge Kutta method and

an enhanced solution quality (ESQ) mechanism to increase solutions' quality. RUN's

performance was evaluated by using 50 mathematical test functions, and the results

were compared with those of other state-of-the-art optimizers. Furthermore, the

proposed RUN was employed to solve four engineering design problems to test its

ability and efficiency in solving a number of real-world optimization problems.

This paper is organized as follows. Section 2 reviews related works. Section 3 presents a summarized review of the Runge Kutta method. Section 4 provides the mathematical formulation and optimization procedures of the RUN algorithm. Section 5 evaluates the efficiency of the RUN on different benchmark test functions and assesses its ability in solving engineering design problems. Section 6 presents the main conclusions and some useful suggestions for future studies.

2. Related works

Generally, stochastic optimization algorithms can be categorized into two

classes: single-based and population-based algorithms. In the first class, the algorithm begins the optimization procedure with a single random position and updates it during each iteration (Mirjalili, et al., 2016). Simulated annealing (SA) (Kirkpatrick, et

al., 1983), tabu search (TS) (Glover & Laguna, 1998), and hill-climbing (HC)

(Tsamardinos, et al., 2006) belong to this class. The primary benefits of the single-based

optimizers include easy implementation and a low number of function evaluations,

while their main drawback is the high possibility of getting caught up in local solutions.

In contrast, the population-based methods start the optimization procedure with a set

of random solutions and update their positions at each iteration. The well-known GA,

PSO, DE, ACO, ABC, and biogeography-based optimization (BBO) (Simon, 2008)

belong to this category. Population-based optimization algorithms also have a relatively

acceptable ability to avoid the local optimal solutions because they employ a set of

solutions at each iteration instead of only evolving on a single agent.

Accordingly, the population-based algorithms can better handle the landscape of the feature space and increase the convergence speed. Furthermore, they can share information between solutions, enabling a more convenient search in complex and challenging feature spaces (Mirjalili, et al., 2016). Notwithstanding these advantages, these optimizers require many function evaluations during the optimization process and a relatively complicated implementation. Another unavoidable issue is that these methods apply a random-based vision for understanding the problem's topography, which can make them unbalanced, inaccurate, or unsuccessful in finding the best solution. However, sometimes a locally accurate solution can satisfy practitioners and the requirements of real-world problems. Many studies indicate that the population-

based optimizers are regarded as more reliable and accurate than the single-based


algorithms because of the advantages mentioned above. Their applications in a broad

range of fields have demonstrated their worthiness and high capability. Generally, these

optimization algorithms have been largely inspired physics's laws, social behaviors of

creatures, and natural phenomena.

Of pertinent mention, a study by Sörensen on low-quality contributions to optimization methods opened the eyes of many researchers (Sörensen, 2015). As per this research, shallow mathematical models dressed in metaphor-based outfits must be avoided to make progress in the field (Lones, 2020). These metaphors are often perplexing and irrelevant to experts, decision-makers, algorithm designers, and those who utilize these methods for real-world cases. It has also been discovered that some methods, such as the popular harmony search, are not very original and can be regarded as special cases of evolutionary search (Saka, et al., 2016).

Regardless of these shortcomings, optimization algorithms consist of exploration and

exploitation phases, as previously mentioned. Since establishing a reasonable balance

between these two phases is a challenge for any optimization technique, designing a

powerful and accurate optimization algorithm to achieve this goal is necessary. Hence, a

novel population-based metaheuristic optimization algorithm based on the Runge

Kutta method was developed in this study. The following two sections focus on the

formulation of this new RUN algorithm.

3. Overview of Runge Kutta method in differential equations

The Runge Kutta method (RKM) is broadly used to solve ordinary differential

equations (Kutta, 1901; Runge, 1895). RKM can be applied to create a high-precision

numerical method by using functions without requiring their high-order derivatives

(Zheng & Zhang, 2017). The primary formulation of the RKM is described as follows.

Consider the following first-order ordinary differential equation for an initial value problem:

dy/dx = f(x, y),   y(x_0) = y_0    (1)

In RKM, the main idea is to define f(x_0, y_0) as the slope (S) of the best straight line fitted to the graph at the point (x_0, y_0). Using the slope at point (x_0, y_0), another point (x_1, y_1) can be obtained by using the best fitted straight line: y_1 = y_0 + S × Δx, where Δx = x_1 − x_0. Similarly, y_2 = y_1 + f(x_1, y_1) × Δx. This process can be repeated m times, which yields an approximate solution in the range of [x_0, x_m].
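As an illustration of this repeated-slope idea, the stepping procedure can be sketched in a few lines of Python. The test ODE dy/dx = y, the step size, and the step count below are arbitrary demonstration choices, not values from the paper:

```python
def euler(f, x0, y0, dx, m):
    """Repeatedly follow the slope f(x, y) along best-fit straight lines."""
    x, y = x0, y0
    for _ in range(m):
        y = y + f(x, y) * dx  # y_{i+1} = y_i + S * dx, with slope S = f(x_i, y_i)
        x = x + dx
    return y

# Example: dy/dx = y, y(0) = 1, whose exact solution is e^x; y(1) approximates e.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

With 1000 steps of size 0.001, the result is within about 0.002 of e, illustrating the first-order accuracy of this scheme.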

The derivation of RKM is based on the Taylor series, which is given by:

y_{n+1} = y_n + Δx × y'_n + (Δx^2 / 2!) × y''_n + (Δx^3 / 3!) × y'''_n + ...    (2)


By dropping the higher-order terms, the following approximate equation can be obtained:

y_{n+1} = y_n + Δx × y'_n    (3)

According to Eq. (3), the formula for the first-order Runge Kutta method (or Euler method) can be expressed as:

y_{n+1} = y_n + Δx × f(x_n, y_n)    (4)

where y'_n = f(x_n, y_n) and Δx = x_{n+1} − x_n.

The first-order derivative (y') can be approximated by using the following central differencing formula (Patil & Verma, 2006):

y' = (y(x + Δx) − y(x − Δx)) / (2Δx)    (5)

Thus, the rule in Eq. (4) can be rewritten as:

y_{n+1} = y_n + ((y(x_n + Δx) − y(x_n − Δx)) / (2Δx)) × Δx    (6)
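As a quick numerical check of the central differencing formula in Eq. (5), the following snippet approximates a derivative; the test function x^3 and the step size are arbitrary illustrative choices:

```python
def central_diff(y, x, dx):
    # y'(x) ~ (y(x + dx) - y(x - dx)) / (2 dx), second-order accurate in dx
    return (y(x + dx) - y(x - dx)) / (2.0 * dx)

# The exact derivative of x^3 at x = 2 is 3 * 2^2 = 12.
slope = central_diff(lambda x: x**3, 2.0, 1e-4)
```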

In this study, the fourth-order Runge Kutta (RK4) method (England, 1969), derived from Eq. (2), was used to develop the proposed optimization method. The formula for the RK4 method, which is based on the weighted average of four increments (as shown in Fig. 1), can be expressed as:

y_{n+1} = y_n + (Δx / 6) × (k_1 + 2k_2 + 2k_3 + k_4)    (7)

in which the four weighted factors (k1, k2, k3, and k4) are respectively given by:

k_1 = f(x_n, y_n)
k_2 = f(x_n + Δx/2, y_n + (Δx/2) × k_1)
k_3 = f(x_n + Δx/2, y_n + (Δx/2) × k_2)
k_4 = f(x_n + Δx, y_n + Δx × k_3)    (8)

where k1 is the first increment and determines the slope at the beginning of the interval [x_n, x_{n+1}] using y_n; k2 is the second increment and specifies the slope at the midpoint, using y_n and k1; k3 is the third increment and defines the slope at the midpoint, using y_n and k2; and k4 is the fourth increment and is determined based on the slope at the end of the interval, using y_n and k3. According to RK4, the next value y_{n+1} is specified by the current value y_n plus the weighted average of the four increments.
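The RK4 update in Eqs. (7) and (8) can be written compactly as follows; the test problem (dy/dx = y with y(0) = 1, whose exact solution at x = 1 is e) is an illustrative choice, not from the paper:

```python
def rk4_step(f, x, y, dx):
    """One fourth-order Runge Kutta step: weighted average of four slopes."""
    k1 = f(x, y)                          # slope at the beginning of the interval
    k2 = f(x + dx / 2, y + dx / 2 * k1)   # slope at the midpoint, using k1
    k3 = f(x + dx / 2, y + dx / 2 * k2)   # slope at the midpoint, using k2
    k4 = f(x + dx, y + dx * k3)           # slope at the end of the interval
    return y + dx / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: dy/dx = y, y(0) = 1; ten steps of size 0.1 approximate y(1) = e.
y = 1.0
for i in range(10):
    y = rk4_step(lambda x, y: y, i * 0.1, y, 0.1)
```

Even with this coarse step size, the fourth-order accuracy brings the result within about 1e-5 of e, far closer than the first-order Euler rule of Eq. (4).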

Fig. 1. Slopes utilized in the RK method

4. Introduction to the Runge Kutta optimizer

In this study, a new swarm-based model with stochastic components is

developed for optimization purposes. This model eliminates the cliché inspiration

attachment with itself the proposed RUN method is represented by using a metaphor-

free language with emphasis on the mathematical core as some sets of activated rules at

the proper time. Using metaphors in a population-based model is rejected since the

only benefit of such a way is to hide the real nature of the equations utilized within the

optimizers. Therefore, RUN accounts for the main logic of the RK technique and the

population-based evolution of a crowd of agents. In fact, the RK uses a specific

formulation (i.e., RK4 method) to calculate the slope and solve the ordinary differential

equations (Kutta, 1901; Runge, 1895). RUN's main idea is based on the concept of the

proposed calculated slope in the RK method. The RUN uses the calculated slope as a

searching logic to explore the promising area in the search space and build a set of rules

for the evolution of a population set according to the logic of swarm-based optimization algorithms. The mathematical formulation of RUN is detailed in the following

subsections.

4.1. Initialization step


In this step, the logic is to set an initial swarm to be evolved within the allowed number of iterations. In RUN, N positions are randomly generated for a population with a size of N. Each member of the population, x_n (n = 1, 2, ..., N), is a solution with a dimension of D for an optimization problem. In general, the initial positions are randomly created as follows:

x_{n,l} = LB_l + rand × (UB_l − LB_l)    (9)

where LB_l and UB_l are the lower and upper bounds of the lth variable of the problem (l = 1, 2, ..., D), and rand is a random number in the range of [0, 1]. This rule simply generates some solutions within the variable limits.
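Eq. (9) amounts to uniform random sampling inside the box constraints. A minimal sketch follows; the population size, dimension, and bounds are illustrative values, not settings from the paper:

```python
import random

def initialize_population(N, D, lb, ub):
    # x_{n, l} = lb_l + rand * (ub_l - lb_l), with rand ~ U[0, 1] drawn per entry
    return [[lb[l] + random.random() * (ub[l] - lb[l]) for l in range(D)]
            for _ in range(N)]

pop = initialize_population(N=30, D=5, lb=[-100] * 5, ub=[100] * 5)
```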

4.2. Root of search mechanism

The power of any optimizer is dependent on its iterative cores for generating

the exploration and exploitation patterns. In the exploration core, an optimization

algorithm uses a set of random solutions with a high randomness rate to explore the

promising areas of the feasible space. In the exploitation core, variations in the solutions are small and gradual, and random behaviors are remarkably lower than those in the exploration mechanism (Mirjalili, 2015a). In this study, RUN's leading search

mechanism is based on the RK method to search the decision space using a set of

random solutions and implement a proper global and local search.

The RK4 method was employed to determine the search mechanism in the proposed RUN. The first-order derivative was utilized to define the coefficient k1, which is calculated by Eq. (5). Moreover, the proposed optimization algorithm uses a solution's position x instead of its fitness y, because evaluating the objective function of a position requires considerable computing time. According to Eq. (5), x + Δx and x − Δx are two neighboring positions of x. Considering a minimization problem, the positions x − Δx and x + Δx take the best and worst positions, respectively. Therefore, to create a population-based algorithm, the position x − Δx is replaced by x_b (i.e., the best position around x), while the position x + Δx is replaced by x_w (i.e., the worst position around x). Therefore, k1 is defined as:


(10)

where x_w and x_b are the worst and best solutions obtained at each iteration, which are determined based on three random solutions selected from the members of the population (x_r1, x_r2, and x_r3).

In order to enhance the exploration search and introduce random behavior, Eq. (10) can be rewritten as follows:

(10-1)


(10-2)

where rand is a random number in the range of [0, 1]. Overall, the best solution (x_b) plays a crucial role in finding promising areas and moving toward the global best solution. Therefore, in this study, a random parameter (u) is used to increase the importance of the best solution (x_b) during the optimization process. In Eq. (10), Δx can be specified by:

(11-1)

(11-2)

(11-3)

where Δx is the position increment, which depends on parameter Stp. Stp is the step size, determined by the difference between x_b and x_avg. Parameter γ is a scale factor determined by the size of the solution space, which decreases exponentially during the optimization process. x_avg is the average of all solutions at each iteration. Using the random numbers (rand) in Eqs. (11-1) to (11-3), the method can produce more diversification trends and explore various areas of the search space.

Accordingly, the three other coefficients (i.e., k2, k3, and k4) can be respectively written as:

(12)

(13)

(14)

where rand1 and rand2 are two random numbers in the range of [0, 1]. In this study, x_w and x_b are determined by the following:

(15)

where x_rbest is the best of the three random solutions (x_r1, x_r2, and x_r3). According to Eq. (15), if the fitness of the current solution (x_n) is better than that of x_rbest, the best and worst solutions (x_b and x_w) are equal to x_n and x_rbest, respectively; otherwise, they are equal to x_rbest and x_n, respectively.

Therefore, the leading search mechanism in RUN can be defined as:

SM = (1/6) × x_RK × Δx    (16)

in which

x_RK = k1 + 2 × k2 + 2 × k3 + k4    (16-1)

4.3. Updating solutions

The RUN algorithm begins the optimization process with a set of random

individuals (solutions). At each iteration, solutions update their positions using the RK

method. To do this, RUN uses a solution and the search mechanism obtained by the

RK method. Figure 2 depicts how a position updates its position by using the RK

method. In this study, to provide the global (exploration) and local (exploitation)

search, the following scheme is implemented to create the position at the next iteration:

(17)    (exploration phase / exploitation phase)

in which rand is a random number in the range [0, 1] and randn is a random number with a normal distribution.

Fig. 2. Slopes employed by the RK method to obtain the next position (x_{n+1}) in the RUN algorithm


The formulas of x_c and x_m are expressed as:

(17-1)

(17-2)

and the auxiliary coefficients in these formulas can be calculated as follows:

(17-3)

(17-4)

where rand is a random number in the range of (0, 1); x_best is the best-so-far solution; x_lbest is the best position obtained at each iteration; and SF is an adaptive factor, which is given by:

(17-5)

in which

(17-6)

where a and b are two constant numbers, i is the current iteration number, and Maxi is the maximum number of iterations. In this study, SF was employed to provide a suitable balance between exploration and exploitation. Based on Eq. (17-5), a large value of SF is generated in the early iterations to increase diversity and enhance the exploration search; its value then decreases over the iterations to promote the exploitation capability. The main control parameters of RUN are the two parameters employed in SF, namely a and b.
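The exact expressions of Eqs. (17-5) and (17-6) are given above; the qualitative behavior they describe (a randomized factor whose envelope is large early and decays with iterations) can be sketched as follows. The decay form f = a · exp(−b · rand · i / Maxi) and the constants a = 20 and b = 12 below are illustrative assumptions, not the paper's verified settings:

```python
import math
import random

def scale_factor(i, max_iter, a=20.0, b=12.0):
    # Illustrative stand-in for the adaptive factor SF: the envelope
    # f = a * exp(-b * rand * i / max_iter) shrinks as the iteration i grows,
    # favoring exploration early and exploitation later.
    f = a * math.exp(-b * random.random() * (i / max_iter))
    return 2.0 * (0.5 - random.random()) * f  # randomized sign and magnitude in [-f, f]
```

Plotting |scale_factor(i, Maxi)| over i would show a wide band at the start of the run that narrows toward zero, which is the transition behavior described in the text.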

The rule in Eq. (17) shows that the proposed RUN selects the exploration and exploitation phases based on the condition rand < 0.5. This novel procedure ensures that if rand < 0.5, a global search is applied in the solution space and a local search around solution x_c is performed simultaneously. By implementing this novel global search (exploration), RUN can explore the most promising regions of the search space. On the other hand, if rand ≥ 0.5, RUN uses a local search around solution x_m. By applying this local search phase, the proposed algorithm can effectively increase the convergence speed and focus on high-quality solutions.

To perform the local search around the solutions x_best and x_lbest and explore the promising regions in the search space, Eq. (17) is rewritten as follows:

(18)    (first rule: exploration phase; second rule: exploitation phase)


where r is an integer number, which is 1 or −1; this parameter changes the search direction and increases diversity. g is a random number in the range [0, 2]. According to Eq. (18), the local search around x_best decreases as the number of iterations increases. Fig. 3 displays the search mechanism of RUN, indicating how position x_{n+1} is generated at the next iteration.


Fig. 3. Search mechanism of the RUN

4.4. Enhanced solution quality

In the RUN algorithm, enhanced solution quality (ESQ) is employed to

increase the quality of solutions and avoid local optima in each iteration. By applying


ESQ, the RUN algorithm ensures that each solution moves toward a better position. In

the proposed ESQ, the average of three random solutions (x_avg) is calculated and combined with the best position (x_best) to generate a new solution (x_new1). The following scheme is executed to create the solution (x_new2) by using the ESQ:

(19)

in which

(19-1)

(19-2)

where β is a random number in the range of [0, 1]; c is a random number, which is equal to 5 × rand in this study; w is a random number, which decreases with the increasing number of iterations; r is an integer number, which is 1, 0, or −1; and x_best is the best solution explored so far. According to the above scheme, for w < 1 (i.e., the later iterations), solution x_new2 tends to create an exploitation search, while for w > 1 (i.e., the early iterations), solution x_new2 tends to make an exploration search. Note that in the latter condition, parameter randn is used to increase the diversity. It is noteworthy that the ESQ is applied when the condition rand < 0.5 is met.

The solution calculated in this part (x_new2) may not have a better fitness than that of the current solution (i.e., f(x_new2) > f(x_n)). To have another chance of creating a good solution, another new solution (x_new3) is generated, which is defined as follows:

if rand < w

(20)

end

where v is a random number with a value of 2 × rand. In fact, the new solution (x_new3) is created when the condition rand < w is met. The main objective of Eq. (20) is to move the solution x_new3 towards a better position. In the first rule of this equation, a local search around x_new2 is generated, and in the second rule, RUN attempts to explore the promising regions by moving towards the best solution. Hence, to emphasize the importance of the best solution, coefficient v is used. It should be noted that, in this calculation, the better of the two candidate solutions plays the role of x_b and the worse plays the role of x_w, according to their fitness values. The pseudo-code and flowchart of RUN are presented in Algorithm 1 and Fig. 4, respectively.

Algorithm 1. The pseudo-code of RUN

Stage 1. Initialization
Initialize the parameters a and b
Generate the RUN population x_n (n = 1, 2, ..., N)
Calculate the objective function of each member of the population
Determine the solutions x_w, x_b, and x_best

Stage 2. RUN operators
for i = 1 : Maxi
    for n = 1 : N
        for l = 1 : D
            Calculate position x_{n+1,l} using Eq. (18)
        end for
        Enhance the solution quality
        if rand < 0.5
            Calculate position x_new2 using Eq. (19)
            if f(x_n) < f(x_new2)
                if rand < w
                    Calculate position x_new3 using Eq. (20)
                end
            end
        end
        Update positions x_w and x_b
    end for
    Update position x_best
    i = i + 1
end

Stage 3. return x_best
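A minimal sketch of Algorithm 1's control flow is given below. The RK-based move and the ESQ step are deliberately simplified placeholders (moving toward the best-so-far solution, and averaging random members with the best, respectively) standing in for Eqs. (18)-(20); only the loop structure, the rand < 0.5 ESQ gate, and the greedy selection follow the pseudo-code above:

```python
import random

def run_skeleton(obj, N, D, lb, ub, max_iter):
    # Stage 1. Initialization (Eq. 9), with scalar bounds for simplicity
    pop = [[lb + random.random() * (ub - lb) for _ in range(D)] for _ in range(N)]
    fit = [obj(x) for x in pop]
    best = min(pop, key=obj)

    # Stage 2. RUN operators
    for _ in range(max_iter):
        for n in range(N):
            # Placeholder for the RK-based update of Eq. (18):
            # take a randomized step toward the best-so-far position.
            cand = [x + 0.5 * (b - x) * random.random()
                    for x, b in zip(pop[n], best)]

            # ESQ gate: applied only when rand < 0.5 (Eqs. 19-20, abbreviated:
            # combine the average of three random solutions with the best one).
            if random.random() < 0.5:
                avg = [sum(p[d] for p in random.sample(pop, 3)) / 3
                       for d in range(D)]
                esq = [0.5 * (a + b) for a, b in zip(avg, best)]
                if obj(esq) < obj(cand):
                    cand = esq

            if obj(cand) < fit[n]:  # greedy selection keeps the better position
                pop[n], fit[n] = cand, obj(cand)

        best = min(pop, key=obj)  # update the best-so-far solution

    # Stage 3. return the best solution explored so far
    return best
```

For example, `run_skeleton(lambda x: sum(v * v for v in x), N=20, D=5, lb=-100.0, ub=100.0, max_iter=50)` drives a sphere-function objective toward the origin, though the real RUN update rules are required to reproduce the paper's results.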


Fig. 4. Flowchart of the RUN algorithm

As shown in Fig. 5, three paths are considered for optimization in RUN. The proposed algorithm first uses the RK search mechanism to generate position x_{n+1} and then employs the ESQ mechanism to explore the promising regions in the search space. According to this mechanism, RUN follows three paths to reach a better solution. In the first and second paths, position x_new2 calculated by the ESQ is compared with position x_{n+1}. If the fitness of x_new2 is worse than that of x_{n+1} (i.e., f(x_new2) > f(x_{n+1})), another position (x_new3) is generated. If f(x_new3) < f(x_{n+1}), the best solution is x_new3 (second path); otherwise, it is x_{n+1} (first path). In the third path, if f(x_new2) < f(x_{n+1}), the best solution is x_new2.

The following characteristics theoretically demonstrate the proficiency of RUN in solving various complex optimization problems:

- Scale factor (SF) has a randomized adaptation nature, which assists RUN in further improving the exploration and exploitation steps. This parameter ensures a smooth transition from exploration to exploitation.

- Using the average position of solutions can promote RUN's exploration tendency in the early iterations.

- RUN employs a search mechanism based on the RK method to boost both exploration and exploitation abilities.

- The enhanced solution quality (ESQ) mechanism utilizes the thus-far best solution to promote the quality of solutions and improve the convergence speed.

- If a new solution is not in a better position than the current solution, RUN can identify a new, different position in the search space to reach a better position. This process can enhance the quality of solutions and improve the convergence rate.

- The search mechanism and the ESQ use two randomized variables to emphasize the importance of the best solution and move toward the global best solution, which can effectively balance the exploration and exploitation steps.


Fig. 5. Optimization process in the RUN

4.5. Computational complexity

The RUN algorithm mainly includes the following parts: initialization, finding the maximum and minimum fitness, finding the minimum among three random individuals, exploration of the search space, parameter updating, and fitness evaluation. Here, N indicates the number of individuals in the population, D is the problem's dimension, and Maxi indicates the maximum number of iterations. The computational complexity of initialization, fitness evaluation, parameter updating, and exploration of the search space is O(N × D × Maxi); that of finding the minimum among three random individuals is O(N × Maxi); and that of finding the maximum and minimum fitness is O(N × Maxi). From this, the complexity of the whole algorithm is O(N × D × Maxi).
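The dominant O(N × D × Maxi) term can be made concrete by counting the elementary position updates in a generic population loop; this is a schematic count, not RUN-specific code:

```python
def update_count(N, D, max_iter):
    # Each iteration updates every dimension of every individual once,
    # so the number of elementary position updates is N * D * max_iter.
    count = 0
    for _ in range(max_iter):
        for _ in range(N):
            for _ in range(D):
                count += 1
    return count
```

For instance, a population of 30 individuals in 5 dimensions over 100 iterations performs 30 × 5 × 100 = 15000 elementary updates.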

5. Results and discussion

The new RUN algorithm's ability was verified using 20 benchmark functions, which have been used by many researchers (Ahmadianfar, et al., 2019; Huang, et al., 2019; Tian & Gao, 2017; Zhao, et al., 2019). The set of benchmark problems employed in this study involves three families of mathematical functions: unimodal functions (UFs) (f1-f6), multimodal functions (MFs) (f7-f14), and hybrid functions (HFs) (f15-f20). The details on these test functions are shown in Tables 1-3.


Table 1. Unimodal test functions.

Function    D    Range          fmin
f1          30   [-100, 100]    0
f2          30   [-100, 100]    0
f3          30   [-100, 100]    0
f4          30   [-100, 100]    0
f5          30   [-100, 100]    0
f6          30   [-100, 100]    0

Table 2. Multimodal test functions.

Function   D    Range          fmin
f7         30   [-100, 100]    0
f8         30   [-100, 100]    0
f9         30   [-100, 100]    0
f10        30   [-32, 32]      0
f11        30   [-100, 100]    0
f12        30   [-100, 100]    0
f13        30   [-600, 600]    0
f14        30   [-50, 50]      0

Table 3. Hybrid benchmark functions

Test function   Name         D    Search space   fmin
f15             HF 1 (N=3)   30   [-100, 100]    1700
f16             HF 2 (N=3)   30   [-100, 100]    1800
f17             HF 3 (N=4)   30   [-100, 100]    1900
f18             HF 4 (N=4)   30   [-100, 100]    2000
f19             HF 5 (N=5)   30   [-100, 100]    2100
f20             HF 6 (N=5)   30   [-100, 100]    2200
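The function expressions in Tables 1-3 are typeset as equations in the original article. As a representative unimodal case, a sphere-type function (commonly used as the first function in such suites; an assumption here, not confirmed by the tables) can be evaluated over the listed search range as follows:

```python
import random

def sphere(x):
    """Sphere function: a classic unimodal benchmark with its global
    minimum of 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_solution(dim=30, lo=-100.0, hi=100.0):
    """Uniform random solution inside the [lo, hi]^dim search space,
    matching the D = 30 and [-100, 100] settings in Table 1."""
    return [random.uniform(lo, hi) for _ in range(dim)]
```

Evaluating `sphere` at the origin returns the fmin value of 0 reported in Table 1; any other point yields a strictly positive fitness.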

The unimodal test functions, each with a single global optimum, evaluate an optimization algorithm's exploitative behavior, while the multimodal test functions assess its exploration and local-optima avoidance capabilities. Note that the hybrid test functions are more challenging than the unimodal and multimodal test functions (Ahmadianfar, Bozorg-Haddad, et al., 2020) and are therefore well suited to validate an optimizer's ability to solve complicated real-world

optimization problems. The proposed RUN results and efficiency were compared with

those of other well-known algorithms, including the GWO (Mirjalili, et al., 2014),

WOA (Mirjalili & Lewis, 2016), WCA (Eskandar, et al., 2012), IWO (Hosseini, 2007),

and CS (Yang & Deb, 2010) algorithms, based on the average and standard deviation of

the results. The GWO and IWO were included in the comparisons, as these widely-

used methods are two examples of the metaphor-based optimizers (Camacho Villalón,

et al., 2020). Six different test functions were selected to assess the effects of the RUN

algorithm qualitatively. Figure 6 depicts the qualitative results of test functions f1, f2,

f4, f7, f10, and f12. RUN was employed for minimizing these functions by using five

solutions over 200 iterations.

5.1. Experimental setup


The population size and the total number of iterations were set respectively

equal to 50 and 500 for the UFs and MFs and 50 and 1000 for the HFs. All results were

presented and compared in terms of the optimization algorithms' average efficiencies

over 30 independent runs. For GWO, WOA, CS, IWO, and WCA, the control

parameters were the same as those suggested in the original works. Table 4 lists the parameters used in this study.

Table 4. Parameter settings of optimization algorithms

Optimizer   Parameters
RUN         a = 20 and b = 12
GWO         As suggested in the original work
WOA         As suggested in the original work
CS          Rate of discovery = 0.25
WCA         Number of rivers + sea (Nsr) = 10; controlling parameter (dmax) = 0.1
IWO         Maximum number of seeds (Smax) = 15; minimum number of seeds (Smin) = 0;
            initial standard deviation = 5; final standard deviation = 0.01
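The reporting protocol above (statistics of the best fitness over 30 independent runs) can be sketched as:

```python
import statistics

def summarize_runs(run_best_fitnesses):
    """Average, Best, and SD of final fitness values over independent runs,
    matching the columns reported in the result tables."""
    return {
        "Average": statistics.mean(run_best_fitnesses),
        "Best": min(run_best_fitnesses),
        "SD": statistics.stdev(run_best_fitnesses),
    }
```

For a minimization problem, "Best" is the minimum over runs; the sample standard deviation quantifies run-to-run robustness of the optimizer.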

5.2. Qualitative results of RUN

Three well-known qualitative metrics used to demonstrate RUN's performance

were search history, trajectory graph, and convergence curve. The search history graph

discloses the history of the RUN algorithm's positions during the optimization process.

The trajectory curve displays how the first dimension of a solution changed during the

iterations. The convergence curve demonstrates how the fitness value of the best

solution changed during the optimization process.

Figure 6 shows that RUN yielded a similar pattern to solve different problems

regarding the history of positions. This indicates that an attempt was made to initially

increase the exploration and find the promising regions of the search space and then

exploit the neighborhood of the best solutions. From the trajectory curves in Fig. 6, it

can be observed that RUN began the search process with large fluctuations covering nearly the entire search space. This behavior reveals the exploration

tendency of the RUN algorithm. As the number of iterations increased, the amplitude

of these variations reduced. This procedure ensured the transition of RUN from the

exploratory search towards exploitative trends. Therefore, it is concluded from the

trajectory graphs that the RUN algorithm first provided the exploration trend and then

shifted to the exploitation stage.

The convergence graph is usually employed to assess the convergence

performance of optimizers. Fig. 6 displays an accelerated reducing pattern in all

convergence curves, especially in the early iterations. It also shows the approximate


timing when RUN transferred from the exploration to the exploitation phase. These

results demonstrate the suitable accelerated convergence behavior of RUN.

5.3. Assessment of the exploitative behavior

Since the UFs (f1-f6) have only one global optimum, they are typically used to evaluate the exploitation ability of optimization algorithms. Table 5 shows the results of the

RUN, GWO, WOA, CS, IWO, and WCA algorithms for the UFs, including the

average, best, and standard deviation values of the fitness function for 30 different

runs. The comparisons of RUN with the five other meta-heuristic optimization

algorithms demonstrated that RUN was the best optimizer to solve the UFs and

provide competitive results. Particularly, the proposed RUN algorithm exhibited an

excellent exploitation behavior.

5.4. Assessment of the exploratory behavior

The multimodal functions (f7-f14), which have many local optima, were used to validate the optimizers' exploratory behavior. Table 5 shows the

results of MFs obtained by the RUN, GWO, WOA, CS, IWO, and WCA algorithms,

indicating the superior performance of RUN to the other optimizers, except for f11. For

function f11, RUN was inferior to the WOA algorithm and superior to GWO and WCA.

The results presented in Table 5 for test functions f7-f14 demonstrate that RUN also has

a superior exploration ability due to the use of the exploration mechanism that ensures

the search process towards the global best solution.

5.5. Ability to avoid local optima

The RUN's ability to avoid the local optima was evaluated by using hybrid

functions (f15-f20). These are regarded as the most complicated benchmark test functions, and only an optimizer with an appropriate balance between global and local search can avoid being trapped in local optima. Table 6 presents the results of

RUN and the five other optimizers on the HFs.

For the results of the HFs in Table 6, it can be clearly observed that RUN was

the best optimizer among the six optimization algorithms on functions f15- f19 according

to their average fitness values. For function f20, RUN was surpassed by GWO but

superior to the WOA, CS, IWO, and WCA algorithms. Indeed, the proposed optimizer was the second most effective optimizer for this test function. This capability is due to

the adaptive mechanism employed to update the parameter and the ESQ

mechanism in the proposed RUN, which assures a good transition from exploration to

exploitation.


Fig. 6. Qualitative results of six benchmark test functions (f1, f2, f4, f7, f10, and f12): 2D search history, trajectory, and convergence curves

Table 5. Results of the UFs and MFs from RUN and five other meta-heuristic optimization algorithms

Optimizer          f1         f2         f3         f4        f5         f6
RUN   Average  1.75E-132  6.68E-267  2.16E-129  2.45E+01  1.26E-137  2.35E-130
      Best     5.31E-145  3.55E-278  1.81E-145  2.29E+01  6.74E-147  1.20E-145
      SD       9.04E-132  0.00E+00   1.18E-128  1.04E+00  5.31E-137  1.29E-129
GWO   Average  3.87E-27   4.17E-97   5.78E-29   2.68E+01  5.60E-33   5.14E-30
      Best     4.33E-29   2.8E-108   2.25E-31   2.52E+01  1.61E-34   1.12E-31
      SD       7.73E-27   1.87E-96   1.48E-28   7.53E-01  5.84E-33   8.14E-30
CS    Average  2.52E-02   1.81E+01   9.00E-01   1.39E+02  5.16E-04   1.88E-01
      Best     4.44E-05   1.46E-06   5.38E-03   2.96E+01  6.67E-06   1.22E-02
      SD       1.17E-01   8.44E+01   1.70E+00   2.37E+02  7.63E-04   3.04E-01
WCA   Average  2.31E-05   6.77E-07   5.02E-09   7.38E+01  6.27E-07   2.86E+03
      Best     2.22E-07   4.05E-25   1.11E-10   8.80E-01  3.13E-12   7.39E-08
      SD       7.01E-05   3.70E-06   9.07E-09   6.54E+01  3.00E-06   7.78E+03
WOA   Average  6.75E-80   1.56E-110  5.52E+03   2.75E+01  2.86E-84   1.30E-81
      Best     9.43E-89   9.17E-141  2.88E+01   2.69E+01  2.63E-94   2.90E-89
      SD       2.45E-79   7.86E-110  3.85E+03   4.12E-01  1.11E-83   5.59E-81
IWO   Average  3.18E+03   1.53E+03   4.24E+02   4.10E+04  5.69E+04   5.01E+06
      Best     8.84E+01   1.06E-05   6.12E-05   2.37E+01  4.21E+04   1.25E+06
      SD       3.14E+03   1.96E+03   6.40E+02   9.02E+04  1.23E+04   2.57E+06

Optimizer          f7        f8        f9        f10       f11       f12       f13       f14
RUN   Average  0.00E+00  2.04E-01  3.82E-04  8.88E-16  1.04E-13  3.42E-01  0.00E+00  6.59E-08
      Best     0.00E+00  4.21E-07  3.82E-04  8.88E-16  6.39E-14  2.33E-01  0.00E+00  3.33E-08
      SD       0.00E+00  1.13E-01  0.00E+00  0.00E+00  1.63E-14  7.53E-02  0.00E+00  1.95E-08
GWO   Average  5.91E+00  1.01E+00  3.82E-04  4.46E-14  2.91E+01  6.39E-01  6.13E-03  3.20E-02
      Best     2.11E+00  6.36E-01  3.82E-04  3.64E-14  2.27E+01  4.41E-01  0.00E+00  6.40E-03
      SD       2.20E+00  1.59E-01  8.72E-13  4.19E-15  3.34E+00  9.60E-02  1.20E-02  2.33E-02
CS    Average  9.86E+00  2.41E+00  4.12E-04  3.73E-03  6.23E-02  5.93E-01  1.47E-02  1.69E-01
      Best     7.74E+00  6.28E-01  3.82E-04  4.69E-04  8.53E-14  4.42E-01  4.35E-10  5.29E-08
      SD       8.36E-01  2.27E+00  4.54E-05  3.44E-03  9.52E-02  8.40E-02  1.80E-02  2.68E-01
WCA   Average  1.20E+01  2.92E+03  5.19E-03  3.40E+00  1.20E-01  5.30E-01  3.13E-02  3.64E-01
      Best     1.03E+01  1.10E+03  3.82E-04  2.19E-02  8.53E-14  2.53E-01  5.08E-12  1.53E-12
      SD       6.12E-01  1.36E+03  2.63E-02  2.28E+00  5.12E-01  1.50E-01  3.86E-02  7.04E-01
WOA   Average  3.00E+00  5.12E-01  3.82E-04  3.73E-15  1.92E-14  5.24E-01  3.05E-03  1.03E-02
      Best     0.00E+00  6.99E-02  3.82E-04  8.88E-16  7.11E-15  2.60E-01  0.00E+00  1.30E-03
      SD       4.43E+00  3.58E-01  5.55E-13  2.70E-15  6.62E-14  1.88E-01  1.67E-02  1.59E-02
IWO   Average  1.30E+01  4.59E+03  6.89E+02  1.24E+00  5.29E+00  3.58E-01  1.67E+02  1.27E-01
      Best     1.21E+01  3.25E+03  3.90E-04  5.07E-03  3.00E+00  2.21E-01  9.25E+01  4.80E-02
      SD       4.09E-01  6.11E+02  3.84E+02  4.71E+00  1.57E+00  8.82E-02  3.99E+01  8.93E-02


Table 6. Statistical results of the HFs from RUN and five other optimizers

Optimizer          f15          f16          f17      f18        f19         f20
RUN   Average  104191.21    3435.33      1919.53  3519.30    48127.89    2674.29
      Best     26504.80     2149.82      1911.91  2345.66    10865.46    2229.29
      SD       42897.96     801.49       5.01     2215.65    22065.81    227.33
GWO   Average  2017606.11   9419404.67   1945.42  23438.34   865855.49   2581.81
      Best     243778.74    4056.06      1912.26  11065.71   66706.84    2250.33
      SD       2197530.17   22146302.91  26.45    12065.16   1222558.84  145.41
CS    Average  1638591.37   8614.09      1931.73  94953.78   405641.76   3114.17
      Best     168986.27    2070.91      1909.39  3577.19    16508.82    2364.87
      SD       1608329.34   8165.00      30.62    309592.19  577986.74   364.57
WCA   Average  1096464.13   5561515.91   1927.69  24082.37   339962.26   2832.20
      Best     177033       2413.67      1910.27  5378.61    23640.99    2579.874
      SD       742290.81    30411215.42  29.31    15291.10   223453.44   136.44
WOA   Average  11178976.28  93612.11     1964.90  76381.26   3876550.62  3084.20
      Best     2520022.97   9512.03      1919.07  28141.42   189834.25   2476.51
      SD       7349962.08   94864.91     34.80    48244.50   4182086.86  252.11
IWO   Average  110385.61    5178.86      1922.03  30483.82   53137.20    3263.82
      Best     15620.9      2229.473     1907.79  3739.462   11885.51    2729.88
      SD       73296.20     3721.69      21.40    13771.33   31510.29    283.44


5.6. Assessment of the convergence ability

The results presented in Tables 5 and 6 demonstrate the RUN algorithm's superior efficiency compared with the other optimizers. However, a convergence behavior analysis must also be performed to further assess the proposed RUN's performance in solving optimization problems. The convergence curves of

RUN, GWO, WOA, CS, IWO, and WCA are depicted in Fig. 7, revealing the

relationships of the best-so-far fitness value explored (y-axis) and the number of

functional evaluations (NFE) (x-axis).
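The best-so-far curve plotted against NFE can be computed from a raw history of fitness evaluations with a single pass:

```python
def best_so_far(fitness_history):
    """Convert per-evaluation fitness values (minimization) into the
    monotone non-increasing best-so-far curve shown in convergence graphs."""
    curve, best = [], float("inf")
    for f in fitness_history:
        best = min(best, f)
        curve.append(best)
    return curve
```

By construction the curve never increases, which is why convergence plots of a population-based minimizer are monotone staircases.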

According to the convergence curves (Fig. 7), the following conclusions can be drawn:

- Concerning the convergence rate, the IWO, WCA, and CS algorithms displayed weak performance in optimizing the UFs and MFs, followed by the WOA and GWO algorithms.
- The RUN optimizer had a faster convergence curve than the other algorithms for the unimodal and multimodal test functions, owing to its proper balance between exploration and exploitation.
- For the HFs, the convergence rate of RUN tended to accelerate with the number of function evaluations, thanks to the ESQ and the adaptive mechanism, which helped it explore the promising areas of the solution space in the early iterations and converge more quickly towards the optimal solution after spending about 15% of the total number of function evaluations.
- The convergence curves revealed that RUN provided a more suitable convergence speed for the test functions than the other optimizers.

Fig. 7. Convergence graphs of the RUN and five other optimizers for the selected UFs, MFs, and HFs

5.7. Ranking analysis

The Friedman and Quade tests (Derrac, et al., 2011) were conducted to

determine the six optimizers' influential performances. These tests employ a

nonparametric two-way analysis of variance, which allows the comparison of several

samples. Based on the Friedman test, all samples are equal in terms of importance. In

contrast, the Quade test considers the fact that some samples are more difficult or

complicated than others and, thus, provides a weighted ranking analysis of the samples

(Derrac, et al., 2011).
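The per-function ranking that underlies the Friedman ranks in Table 7 can be sketched in a few lines (ties are ignored here for simplicity; the full procedure in Derrac et al. also defines the test statistic and p-value):

```python
def friedman_ranks(scores):
    """Mean Friedman rank per algorithm.

    scores[alg] is a list of average fitness values, one per test function
    (lower is better). For each function the algorithms are ranked 1..k,
    and each algorithm's ranks are averaged across functions."""
    algs = list(scores)
    n_funcs = len(next(iter(scores.values())))
    mean_ranks = {a: 0.0 for a in algs}
    for j in range(n_funcs):
        ordered = sorted(algs, key=lambda a: scores[a][j])  # best first
        for rank, a in enumerate(ordered, start=1):
            mean_ranks[a] += rank / n_funcs
    return mean_ranks
```

An algorithm that is best on every function gets a mean rank of 1.00, as RUN does on the UFs in Table 7.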

Tables 7 and 8 show the Friedman and Quade test ranks, including the

individual, average, and final ranks for the average fitness values from RUN and the

five other optimizers on all UF, MF, and HF test functions. The Friedman and Quade

test results indicated that the RUN algorithm performed the best among the six

algorithms on all test functions.

Table 7. Friedman ranks for the UFs, MFs, and HFs for RUN and five other optimizers

UFs
Optimizer   f1  f2  f3  f4  f5  f6   Average  Rank
RUN         1   1   1   1   1   1    1.00     1
GWO         3   3   2   2   3   3    2.67     2
CS          5   6   4   6   5   4    5.00     5
WCA         4   4   5   4   4   5    4.33     4
WOA         2   2   6   3   2   2    2.83     3
IWO         6   5   3   5   6   6    5.17     6

MFs
Optimizer   f7  f8  f9  f10  f11  f12  f13  f14   Average  Rank
RUN         1   1   2   1    2    1    1    1     1.25     1
GWO         3   3   2   3    6    6    3    3     3.63     4
CS          4   4   4   4    3    5    4    5     4.13     3
WCA         5   5   5   6    4    4    5    6     5.00     5
WOA         2   2   2   2    1    3    2    2     2.00     2
IWO         6   6   6   5    5    2    6    4     5.00     5

HFs
Optimizer   f15  f16  f17  f18  f19  f20   Average  Rank
RUN         1    1    1    1    1    2     1.17     1
GWO         5    6    5    3    5    1     4.17     4
CS          4    3    4    6    4    5     4.33     5
WCA         3    5    3    2    3    3     3.17     3
WOA         6    4    6    5    6    4     5.17     6
IWO         2    2    2    4    2    6     3.00     2

Table 8. Quade ranks for the UFs, MFs, and HFs for RUN and five other optimizers

UFs
Optimizer   f1   f2   f3   f4   f5   f6   Average  Rank
RUN         5    1    2    6    3    4    1.00     1
GWO         10   2    8    12   4    6    2.67     2
CS          6    15   12   18   3    9    4.57     5
WCA         16   12   4    20   8    24   4.14     4
WOA         20   5    30   25   10   15   2.76     3
IWO         18   12   6    24   30   36   5.86     6

MFs
Optimizer   f7    f8   f9   f10  f11  f12  f13   f14   Average  Rank
RUN         1.5   7    6    3    4    8    1.5   5     1.33     1
GWO         28    24   8    4    32   20   12    16    3.31     3
CS          24    21   3    6    12   18   9     15    3.94     4
WCA         35    40   5    30   15   25   10    20    4.97     5
WOA         16    12   6    2    4    14   8     10    1.89     2
IWO         30    48   42   18   24   12   36    6     5.56     6

HFs
Optimizer   f15  f16  f17  f18  f19  f20   Average  Rank
RUN         6    3    1    4    5    2     1.10     1
GWO         25   30   5    15   20   10    4.57     5
CS          18   9    3    12   15   6     4.14     4
WCA         20   24   4    12   16   8     3.33     3
WOA         36   24   6    18   30   12    5.19     6
IWO         12   6    2    8    10   4     2.67     2

Table 9 displays the statistics and p-values of the Friedman and Quade tests for

all test functions. As per the p-values calculated for the two tests, significant differences

can be seen among all optimizers.


5.8. Comparison of RUN with advanced optimizers

In order to further evaluate the efficiency of RUN, it was compared with eight

advanced optimizers including CGSCA (Kumar, et al., 2017), SCADE (Nenavath &

Jatoth, 2018), BMWOA (Heidari, Aljarah, et al., 2019), BWOA (Chen, Xu, et al., 2019),

OBLGWO (Heidari, Abbaspour, et al., 2019), CMAES (Hansen, et al., 2003), GL25

(García-Martínez, et al., 2008), and CLPSO (Liang, et al., 2006) in solving the CEC-BC-

2017 benchmark functions. The population size, maximum number of iterations, and

dimension were set to 30, 500, and 30, respectively. All the optimization algorithms

were also performed in 30 different runs for each mathematical test function.

The best, average, and standard deviation of the results calculated by RUN and

the eight advanced optimizers are summarized in Table 10. As shown in Table 10,

RUN presented promising results on the CEC-BC-2017 functions compared with the

other optimizers. Moreover, the proposed RUN displayed the best performance on 20 test functions (f2, f4, f5, f7, f8, f9, f10, f11, f13, f15-f24, and f26) and the second-best efficiency on the remaining 10 test functions (f1, f3, f6, f12, f14, f25, and f27-f30). In this study, to

compute the average ranks of the optimization algorithms and specify their differences,

the Friedman test was performed. Table 11 displays the average ranks of all the

optimizers, where RUN achieved the best rank (1.33). Therefore, RUN had the best

efficiency compared with the eight advanced optimizers. To investigate the

convergence speed of RUN, the convergence curves were obtained for all the

optimizers on the CEC-BC-2017 functions (Fig. 8). It can be observed from Fig. 8 that

RUN achieved accurate solutions with a faster convergence rate than the eight

advanced optimizers.

Table 9. Statistics and p-values computed by the Friedman and Quade tests for the UFs, MFs, and HFs

                    Friedman    Quade
UFs   Statistic     24.7619     10.3445
      p-value       1.55E-04    1.83E-05
MFs   Statistic     28.3333     12.9663
      p-value       3.13E-05    3.61E-07
HFs   Statistic     16.6667     5.0844
      p-value       5.20E-03    2.40E-03


Table 10. Statistical results of the RUN and eight advanced optimizers on CEC-BC-2017

              RUN       CGSCA     SCADE     BMWOA     BWOA      OBLGWO    CMAES     GL25      CLPSO
f1   Best     1.44E+04  1.53E+10  1.87E+10  5.20E+08  1.94E+09  4.44E+07  1.04E+02  6.83E+09  7.65E+09
     Average  3.75E+04  2.51E+10  2.97E+10  1.10E+09  5.58E+09  1.57E+08  5.45E+03  1.69E+10  1.16E+10
     SD       1.40E+04  5.37E+09  4.86E+09  3.73E+08  2.05E+09  8.59E+07  5.75E+03  5.28E+09  2.59E+09
f2   Best     2.92E+14  9.54E+33  6.98E+34  6.58E+22  1.25E+27  2.68E+17  2.02E+10  2.93E+30  4.62E+32
     Average  4.17E+17  8.96E+38  1.13E+40  1.86E+30  4.23E+35  3.80E+22  2.59E+31  4.01E+38  1.29E+43
     SD       1.15E+18  2.88E+39  3.27E+40  1.01E+31  1.58E+36  9.92E+22  1.42E+32  1.32E+39  7.05E+43
f3   Best     3.59E+04  5.40E+04  5.72E+04  5.00E+04  5.78E+04  3.27E+04  1.23E+05  1.22E+05  1.09E+05
     Average  5.05E+04  7.16E+04  7.68E+04  7.99E+04  7.51E+04  4.97E+04  1.94E+05  1.72E+05  1.56E+05
     SD       8.29E+03  1.03E+04  7.59E+03  1.03E+04  7.58E+03  8.31E+03  5.92E+04  3.46E+04  2.38E+04
f4   Best     4.71E+02  1.45E+03  4.93E+03  6.09E+02  8.77E+02  5.19E+02  5.02E+02  1.58E+03  1.97E+03
     Average  5.13E+02  3.57E+03  6.99E+03  7.31E+02  1.41E+03  5.57E+02  9.98E+02  3.22E+03  3.08E+03
     SD       1.81E+01  9.87E+02  1.29E+03  1.11E+02  3.98E+02  3.05E+01  3.64E+02  1.07E+03  8.66E+02
f5   Best     5.92E+02  7.79E+02  8.19E+02  7.10E+02  7.23E+02  6.10E+02  5.79E+02  7.44E+02  7.54E+02
     Average  6.53E+02  8.52E+02  8.74E+02  8.09E+02  8.20E+02  6.84E+02  1.22E+03  8.46E+02  8.08E+02
     SD       2.91E+01  3.21E+01  2.41E+01  4.46E+01  3.44E+01  5.05E+01  1.92E+02  3.96E+01  2.60E+01
f6   Best     6.23E+02  6.54E+02  6.58E+02  6.53E+02  6.55E+02  6.07E+02  6.74E+02  6.44E+02  6.41E+02
     Average  6.40E+02  6.70E+02  6.74E+02  6.68E+02  6.74E+02  6.25E+02  6.97E+02  6.66E+02  6.58E+02
     SD       8.22E+00  7.50E+00  9.15E+00  8.50E+00  9.29E+00  1.28E+01  1.30E+01  9.60E+00  7.81E+00
f7   Best     8.02E+02  1.16E+03  1.17E+03  1.10E+03  1.10E+03  8.78E+02  7.71E+02  1.18E+03  1.13E+03
     Average  9.36E+02  1.26E+03  1.26E+03  1.25E+03  1.30E+03  1.01E+03  4.29E+03  1.33E+03  1.23E+03
     SD       5.73E+01  4.96E+01  5.32E+01  8.54E+01  7.04E+01  6.22E+01  1.19E+03  8.26E+01  4.37E+01
f8   Best     8.75E+02  1.07E+03  1.06E+03  9.74E+02  9.59E+02  8.99E+02  8.59E+02  1.07E+03  1.04E+03
     Average  9.21E+02  1.11E+03  1.12E+03  1.03E+03  1.02E+03  9.61E+02  1.37E+03  1.12E+03  1.10E+03
     SD       2.62E+01  1.99E+01  2.12E+01  2.46E+01  2.51E+01  4.16E+01  1.68E+02  2.65E+01  2.67E+01
f9   Best     2.09E+03  5.18E+03  7.68E+03  5.69E+03  6.20E+03  1.34E+03  1.00E+04  5.04E+03  5.13E+03
     Average  3.52E+03  8.85E+03  1.06E+04  8.59E+03  7.51E+03  4.46E+03  1.50E+04  9.25E+03  1.14E+04
     SD       8.96E+02  1.65E+03  1.05E+03  1.27E+03  1.11E+03  2.28E+03  2.48E+03  2.41E+03  2.35E+03
f10  Best     3.85E+03  7.34E+03  7.69E+03  6.28E+03  5.96E+03  4.44E+03  4.93E+03  8.60E+03  7.27E+03
     Average  5.14E+03  8.92E+03  8.70E+03  7.80E+03  7.37E+03  6.96E+03  6.21E+03  9.51E+03  8.12E+03
     SD       7.73E+02  3.96E+02  3.50E+02  5.80E+02  8.46E+02  1.44E+03  6.32E+02  5.11E+02  3.77E+02
f11  Best     1.19E+03  2.57E+03  3.56E+03  1.38E+03  2.27E+03  1.28E+03  1.35E+03  4.48E+03  3.24E+03
     Average  1.26E+03  4.22E+03  5.28E+03  2.19E+03  3.82E+03  1.38E+03  1.91E+03  1.18E+04  6.44E+03
     SD       3.23E+01  9.46E+02  1.11E+03  5.28E+02  9.31E+02  5.50E+01  9.17E+02  3.70E+03  2.16E+03
f12  Best     2.65E+06  8.42E+08  1.26E+09  2.31E+07  5.69E+07  5.42E+06  3.41E+05  3.42E+08  8.45E+08
     Average  1.38E+07  2.67E+09  3.88E+09  1.44E+08  4.57E+08  4.21E+07  4.20E+06  1.16E+09  1.47E+09
     SD       9.36E+06  1.02E+09  1.14E+09  7.19E+07  2.57E+08  3.56E+07  6.29E+06  6.36E+08  5.55E+08
f13  Best     1.23E+04  5.76E+08  5.87E+08  2.37E+05  1.68E+06  2.06E+05  1.98E+04  1.07E+07  1.64E+08
     Average  2.63E+04  1.37E+09  1.51E+09  2.17E+06  1.30E+07  2.08E+06  1.63E+07  3.46E+08  9.58E+08
     SD       1.45E+04  5.20E+08  6.76E+08  2.97E+06  1.02E+07  3.41E+06  3.52E+07  3.12E+08  4.91E+08
f14  Best     1.22E+04  1.44E+05  4.44E+05  6.09E+04  1.55E+05  7.65E+03  1.16E+04  9.30E+04  9.18E+03
     Average  2.27E+05  1.04E+06  1.25E+06  1.13E+06  2.21E+06  2.57E+05  2.11E+05  2.31E+06  7.30E+05
     SD       1.87E+05  7.10E+05  7.27E+05  9.47E+05  2.27E+06  2.61E+05  1.74E+05  1.78E+06  6.25E+05
f15  Best     7.28E+03  5.83E+06  6.36E+06  3.22E+04  3.87E+04  3.59E+04  2.71E+04  1.71E+05  2.51E+05
     Average  1.42E+04  4.39E+07  2.91E+07  2.87E+05  6.37E+06  2.18E+05  2.52E+05  1.12E+07  7.93E+07
     SD       3.59E+03  4.01E+07  2.41E+07  2.64E+05  7.07E+06  2.22E+05  3.27E+05  1.91E+07  5.37E+07
f16  Best     2.04E+03  3.93E+03  3.74E+03  2.70E+03  3.19E+03  2.14E+03  2.03E+03  3.95E+03  3.41E+03
     Average  2.84E+03  4.44E+03  4.23E+03  3.59E+03  4.33E+03  2.97E+03  3.25E+03  4.49E+03  4.03E+03
     SD       3.28E+02  2.10E+02  2.45E+02  4.91E+02  5.50E+02  3.48E+02  6.90E+02  2.90E+02  3.11E+02
f17  Best     1.83E+03  2.37E+03  2.29E+03  1.99E+03  2.24E+03  1.89E+03  1.79E+03  2.65E+03  2.40E+03
     Average  2.24E+03  2.91E+03  2.84E+03  2.47E+03  2.71E+03  2.33E+03  2.35E+03  3.00E+03  2.80E+03
     SD       2.22E+02  1.92E+02  1.61E+02  2.68E+02  3.23E+02  2.04E+02  3.89E+02  2.23E+02  2.05E+02
f18  Best     5.21E+04  3.98E+06  1.40E+06  4.82E+05  2.07E+05  1.49E+05  2.03E+05  5.71E+05  9.28E+05
     Average  6.11E+05  1.53E+07  1.25E+07  5.32E+06  1.03E+07  3.17E+06  2.24E+06  2.52E+07  8.03E+06
     SD       7.60E+05  7.94E+06  8.79E+06  5.31E+06  1.23E+07  2.56E+06  1.87E+06  1.52E+07  4.97E+06
f19  Best     1.53E+04  3.44E+07  1.28E+07  2.22E+05  4.04E+05  4.41E+04  2.97E+05  4.11E+05  2.52E+06
     Average  4.43E+05  1.12E+08  7.79E+07  1.64E+06  1.21E+07  1.01E+06  1.24E+06  2.28E+07  9.84E+07
     SD       3.45E+05  6.03E+07  5.14E+07  1.44E+06  1.41E+07  8.83E+05  1.00E+06  4.25E+07  8.57E+07


Table 10. Statistical results of the RUN and eight advanced optimizers on CEC-BC-2017 (continued)

              RUN       CGSCA     SCADE     BMWOA     BWOA      OBLGWO    CMAES     GL25      CLPSO
f20  Best     2.27E+03  2.71E+03  2.65E+03  2.40E+03  2.44E+03  2.27E+03  2.53E+03  2.96E+03  2.63E+03
     Average  2.56E+03  2.95E+03  2.99E+03  2.76E+03  2.81E+03  2.62E+03  3.15E+03  3.26E+03  2.87E+03
     SD       1.70E+02  1.36E+02  1.52E+02  1.85E+02  1.94E+02  1.86E+02  3.46E+02  1.64E+02  9.18E+01
f21  Best     2.40E+03  2.57E+03  2.57E+03  2.49E+03  2.56E+03  2.42E+03  2.33E+03  2.57E+03  2.53E+03
     Average  2.44E+03  2.62E+03  2.62E+03  2.56E+03  2.64E+03  2.49E+03  2.59E+03  2.62E+03  2.60E+03
     SD       2.52E+01  2.45E+01  2.80E+01  4.40E+01  5.41E+01  5.34E+01  2.67E+02  2.59E+01  2.39E+01
f22  Best     2.30E+03  4.08E+03  4.96E+03  2.55E+03  3.49E+03  2.33E+03  6.23E+03  3.31E+03  4.30E+03
     Average  3.31E+03  5.39E+03  6.48E+03  5.68E+03  7.74E+03  3.33E+03  8.15E+03  5.31E+03  7.40E+03
     SD       1.86E+03  1.23E+03  1.08E+03  3.15E+03  1.86E+03  1.97E+03  1.32E+03  2.11E+03  1.83E+03
f23  Best     2.74E+03  3.02E+03  3.01E+03  2.87E+03  2.95E+03  2.76E+03  2.94E+03  2.99E+03  2.96E+03
     Average  2.80E+03  3.09E+03  3.09E+03  2.98E+03  3.19E+03  2.85E+03  4.22E+03  3.10E+03  3.09E+03
     SD       2.95E+01  3.74E+01  4.62E+01  7.05E+01  1.16E+02  5.76E+01  5.82E+02  6.85E+01  4.95E+01
f24  Best     2.90E+03  3.19E+03  3.18E+03  3.04E+03  3.07E+03  2.94E+03  3.07E+03  3.12E+03  3.09E+03
     Average  2.98E+03  3.25E+03  3.25E+03  3.13E+03  3.28E+03  2.99E+03  3.12E+03  3.24E+03  3.25E+03
     SD       4.61E+01  4.25E+01  3.36E+01  6.49E+01  9.54E+01  3.23E+01  2.04E+01  6.14E+01  4.78E+01
f25  Best     2.89E+03  3.30E+03  3.35E+03  2.99E+03  3.10E+03  2.90E+03  2.88E+03  3.34E+03  3.44E+03
     Average  2.93E+03  3.70E+03  3.81E+03  3.08E+03  3.20E+03  2.95E+03  2.89E+03  3.72E+03  3.77E+03
     SD       2.67E+01  2.36E+02  2.47E+02  5.70E+01  7.47E+01  2.82E+01  6.37E+00  2.51E+02  2.22E+02
f26  Best     2.80E+03  6.36E+03  7.36E+03  3.74E+03  4.71E+03  3.56E+03  2.80E+03  7.35E+03  6.52E+03
     Average  4.50E+03  8.02E+03  8.21E+03  6.82E+03  8.33E+03  5.73E+03  5.39E+03  8.47E+03  7.92E+03
     SD       1.27E+03  5.81E+02  3.96E+02  1.22E+03  1.12E+03  7.41E+02  1.84E+03  5.37E+02  5.68E+02
f27  Best     3.25E+03  3.41E+03  3.39E+03  3.25E+03  3.33E+03  3.22E+03  3.35E+03  3.51E+03  3.43E+03
     Average  3.31E+03  3.52E+03  3.57E+03  3.33E+03  3.47E+03  3.25E+03  3.51E+03  3.66E+03  3.58E+03
     SD       3.57E+01  6.53E+01  8.54E+01  6.37E+01  1.52E+02  1.57E+01  3.47E+02  1.01E+02  7.67E+01
f28  Best     3.23E+03  4.08E+03  4.48E+03  3.39E+03  3.50E+03  3.27E+03  3.19E+03  3.95E+03  4.25E+03
     Average  3.28E+03  4.76E+03  5.03E+03  3.50E+03  3.82E+03  3.35E+03  3.23E+03  4.88E+03  4.95E+03
     SD       2.06E+01  4.47E+02  3.53E+02  7.26E+01  2.00E+02  3.69E+01  3.00E+01  4.09E+02  4.03E+02
f29  Best     3.69E+03  4.67E+03  5.18E+03  4.25E+03  4.31E+03  3.84E+03  3.42E+03  4.91E+03  4.54E+03
     Average  4.24E+03  5.29E+03  5.67E+03  5.00E+03  5.45E+03  4.28E+03  3.76E+03  5.56E+03  5.13E+03
     SD       2.74E+02  3.17E+02  3.15E+02  5.16E+02  6.21E+02  3.41E+02  2.50E+02  3.28E+02  3.12E+02
f30  Best     3.55E+05  6.81E+07  6.71E+07  1.00E+06  6.87E+06  7.09E+05  7.94E+05  1.76E+07  1.74E+07
     Average  3.99E+06  2.19E+08  2.01E+08  8.83E+06  5.03E+07  6.50E+06  3.18E+06  5.03E+07  7.24E+07
     SD       2.71E+06  8.81E+07  8.13E+07  4.83E+06  4.07E+07  4.35E+06  2.42E+06  3.73E+07  4.27E+07


Table 11. Average ranks of RUN and eight advanced optimizers based on the Friedman test

Algorithm   Friedman ranking   Rank
RUN         1.33               1
CGSCA       6.53               7
SCADE       7.40               9
BMWOA       4.00               3
BWOA        5.70               5
OBLGWO      2.23               2
CMAES       4.43               4
GL25        7.17               8
CLPSO       6.20               6

Fig. 8. Convergence graphs of RUN and eight other algorithms for the selected CEC 2017 benchmark functions

5.9. Sensitivity analysis of RUN

The sensitivity analysis of the control parameters of RUN (i.e., a and b) was

performed, which demonstrated that RUN had a very low sensitivity to the parameter

changes. This research evaluated different combinations of the control parameters on

34 mathematical test functions for designing RUN, including two groups, 14 unimodal

and multimodal test functions (group 1) and 20 test functions of CEC-BC-2017 (group

2). In this regard, the values of each parameter were defined as a = [5, 10, 20,

30, 40] and b = [4, 8, 12, 16, 20]. Since each parameter had 5 values, there were 25

combinations of the design parameters. Each combination was evaluated by the average fitness values obtained from 30 different runs. Fig. 9(a) illustrates the rank values for the two groups of test functions, and Fig. 9(b) presents the average rank values over all combinations. Accordingly, the best rank belongs to C13 (a = 20 and b = 12), and the rank of C19 is very close to that of C13. The ranks of most combinations are also very close to one another, indicating that the proposed algorithm is not very sensitive to parameter changes.
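The 25 parameter combinations can be enumerated as follows; note that the C1-C25 labelling order (b varying fastest) is an assumption about how the combinations were indexed:

```python
from itertools import product

# Parameter grids from the sensitivity analysis.
a_values = [5, 10, 20, 30, 40]
b_values = [4, 8, 12, 16, 20]

# Label each (a, b) pair C1..C25; the ordering here is an assumption.
combos = {f"C{i}": ab
          for i, ab in enumerate(product(a_values, b_values), start=1)}
```

Under this labelling, C13 corresponds to (a = 20, b = 12), consistent with the best-ranked combination reported in the text.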


Fig. 9. Sensitivity analysis of RUN, (a) ranks of uni- and multi-modal test functions

and CEC-2017 (b) average ranks of all combinations

6. Engineering benchmark problems

Four engineering benchmark problems were selected in this study to evaluate

the performance of the proposed RUN algorithm. Solving such engineering design

problems by utilizing specific optimization algorithms is a suitable way to test their

capabilities (Heidari, Mirjalili, et al., 2019). The results obtained by RUN were

compared with those of different well-known optimizers suggested in previous studies.

It is worth noting that the population size and the maximum number of iterations were

30 and 500, respectively, for all problems.


6.1. Rolling element bearing design problem

The primary goal of this problem is to maximize the fatigue life, which is a

function of the dynamic load-carrying capacity. It has ten variables and nine constraints

for modeling and geometric-based limitations. The problem is described in detail by Gupta et al. (2007), and the related mathematical formulation is given in Appendix A.

Fig. 10 displays the schematic view of the rolling element bearing design

problem.

Fig. 10. Rolling element bearing design problem

The results of RUN were compared with those of the GA (Gupta, et al., 2007),

teaching-learning-based optimization (TLBO) (Rao, et al., 2011), passing vehicle search

(PVS) (Savsani & Savsani, 2016), and HHO (Heidari, Mirjalili, et al., 2019) algorithms.

Table 12 presents the statistical results from RUN, GA, TLBO, PVS, and HHO

optimizers, indicating that RUN achieved the best fitness value with significant

progress. The optimal variables of the problem for the five optimizers are shown in

Table 13.

Table 12. Statistical results from RUN, TLBO, GA, PVS, and HHO for the rolling element bearing design problem

        RUN       GA (Gupta,      TLBO (Rao,      PVS (Savsani &   HHO (Heidari,
                  et al., 2007)   et al., 2011)   Savsani, 2016)   Mirjalili, et al., 2019)
Best    83680.47  81843.30        81859.74        81859.59         83011.88
Mean    82025.24  NA*             81438.99        80803.57         NA
SD      977.95    NA              NA              NA               NA
*NA: Not Available



6.2. Speed reducer design problem

In this problem, the weight of the speed reducer is minimized (Mezura-Montes & Coello, 2005). The mathematical formulation of this problem is detailed in Appendix A. The problem has 7 variables and 11 constraints, and its schematic is depicted in Fig. 11.
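For reference, the weight objective of the speed reducer in its standard literature form is given below; the paper's Appendix A formulation is assumed to match, and the 11 constraints are omitted here:

```python
def speed_reducer_weight(x):
    """Weight objective of the speed reducer problem (standard literature
    form; constraints omitted). x = (x1, ..., x7): face width, module of
    teeth, number of teeth, shaft lengths, and shaft diameters."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))
```

Evaluating this objective near the variable values RUN reports in Table 15 yields a weight close to the fitness values around 2996-2999 listed in Tables 14-15.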

Fig. 11. Speed reducer design problem

RUN's optimal results were compared with those of the CS (Gandomi, et al., 2013), HGSO (Hashim, et al., 2019), GWO, and WOA optimizers. Table 14 gives the

results of these optimization algorithms for this problem. It can be observed that RUN

achieved the best solution and outperformed the compared optimizers. In addition, the

optimal variables of the problem are tabulated in Table 15.

Table 13. Comparison of the results from RUN, TLBO, GA, PVS, and HHO for the rolling element bearing design problem

Variable   RUN        TLBO (Rao,      GA (Gupta,      PVS (Savsani &   HHO (Heidari,
                      et al., 2011)   et al., 2007)   Savsani, 2016)   Mirjalili, et al., 2019)
x1         21.59796   21.42559        21.42300        21.42559         21.0000
x2         125.2142   125.7191        125.71710       125.71906        125.0000
x3         0.51500    0.51500         0.51500         0.51500          0.51500
x4         0.51500    0.51500         0.51500         0.51500          0.51500
x5         11.4024    11.0000         11.0000         11.0000          11.0920
x6         0.40059    0.42426         0.41590         0.40043          0.4000
x7         0.61467    0.63394         0.65100         0.68016          0.6000
x8         0.30530    0.30000         0.30004         0.30000          0.3000
x9         0.02000    0.06885         0.02230         0.07999          0.0504
x10        0.63665    0.79994         0.75100         0.70000          0.6000


Table 15. Comparison of the results from RUN, CS (Gandomi, et al., 2013), HGSO (Hashim, et al., 2019), GWO (Hashim, et al., 2019), and WOA (Hashim, et al., 2019) for the speed reducer design problem

Variables   RUN       CS        HGSO      GWO       WOA
x1          3.5001    3.5015    3.4970    3.5000    3.4210
x2          0.7000    0.7000    0.7100    0.7000    0.7000
x3          17.000    17.000    17.020    17.000    17.000
x4          7.0000    7.6050    7.6700    7.3000    7.3000
x5          7.8000    7.8181    7.8100    7.8000    7.8000
x6          3.3500    3.3520    3.3600    2.9000    2.9000
x7          5.2900    5.2875    5.2850    2.9000    5.0000
Fitness     2996.73   3000.98   2997.10   2998.83   2998.40

6.3. Three-bar truss problem

The objective of this problem is to minimize the weight of a three-bar truss (Cheng & Prayogo, 2014; Gandomi, et al., 2013), one of the most widely used engineering benchmark problems in previous studies. Fig. 12 displays the structure of this problem, in which the main design variables are the cross-sectional areas of bars 1, 2, and 3. The mathematical formulation (i.e., the objective function and constraints) of the problem is detailed in Appendix A.
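The standard three-bar truss formulation used across the cited works can be sketched as follows; the constants L, P, and sigma are the usual literature values and may differ from the exact variant in Appendix A:

```python
import math

L = 100.0      # bar length (cm)
P = 2.0        # applied load (kN/cm^2)
SIGMA = 2.0    # allowable stress (kN/cm^2)

def truss_weight(x1, x2):
    # Objective: weight of the truss, with x1 the cross-section of the two
    # outer bars (bars 1 and 3) and x2 that of the middle bar (bar 2).
    return (2 * math.sqrt(2) * x1 + x2) * L

def truss_constraints(x1, x2):
    # Stress constraints g_i(x) <= 0 for the three bars.
    s2 = math.sqrt(2)
    return [
        (s2 * x1 + x2) / (s2 * x1**2 + 2 * x1 * x2) * P - SIGMA,
        x2 / (s2 * x1**2 + 2 * x1 * x2) * P - SIGMA,
        1.0 / (x1 + s2 * x2) * P - SIGMA,
    ]

# A near-optimal solution commonly reported in the literature; its weight
# matches the ~263.8958 best values in Table 16.
x1, x2 = 0.7887, 0.4082
print(round(truss_weight(x1, x2), 4))
```

With only two free variables and three stress constraints, most optimizers reach essentially the same optimum, which is why the Best values in Table 16 agree to four decimal places and the comparison rests mainly on the Mean and SD columns.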

Table 14. Statistical results from RUN, CS (Gandomi, et al., 2013), HGSO (Hashim, et al., 2019), GWO (Hashim, et al., 2019), and WOA (Hashim, et al., 2019) for the speed reducer design problem

        RUN        CS         HGSO       GWO        WOA
Best    2996.348   NA         2996.4     2998.545   2998.134
Mean    2996.348   3007.2     2996.9     2998.832   2998.445
SD      7.63E-09   4.96E+00   4.39E-05   1.86E-06   1.94E-06


Fig. 12. Three-bar truss problem

The results of RUN were compared with those of MVO (Mirjalili, et al., 2016), the grasshopper optimization algorithm (GOA) (Mirjalili, et al., 2018), moth-flame optimization (MFO) (Mirjalili, 2015b), the mine blast algorithm (MBA) (Sadollah, et al., 2013), CS (Gandomi, et al., 2013), and HHO (Heidari, Mirjalili, et al., 2019). Table 16 displays the results acquired from RUN and the six other optimizers, revealing that the proposed RUN yielded better results than the other optimizers. Furthermore, the optimized variables obtained by the seven optimization algorithms are shown in Table 17.

Table 16. Comparison of statistical results of RUN with those of MVO (S. Mirjalili, et al., 2016), GOA (S. Z. Mirjalili, et al., 2018), MFO (S. Mirjalili, 2015b), MBA (Sadollah, Bahreininejad, Eskandar, & Hamdi, 2013), CS (Gandomi, et al., 2013), and HHO (Heidari, Mirjalili, et al., 2019) for the three-bar truss problem

        RUN         MVO        GOA        MFO        MBA          CS         HHO
Best    263.8958    263.8958   263.8958   263.8955   263.8958     263.9715   263.8958
Mean    263.89768   NA         NA         NA         263.897996   264.0669   NA
SD      2.30E-03