Academic Editor: Ioannis G. Tsoulos
Received: 1 January 2025; Revised: 15 January 2025; Accepted: 19 January 2025; Published: 26 January 2025
Citation: Qian, Z.; Zhang, Y.; Pu, D.; Xie, G.; Pu, D.; Ye, M. A New Hybrid Improved Kepler Optimization Algorithm Based on Multi-Strategy Fusion and Its Applications. Mathematics 2025, 13, 405. https://doi.org/10.3390/math13030405
Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A New Hybrid Improved Kepler Optimization Algorithm Based
on Multi-Strategy Fusion and Its Applications
Zhenghong Qian 1,2, Yaming Zhang 1,*, Dongqi Pu 1,2, Gaoyuan Xie 1, Die Pu 1 and Mingjun Ye 1,2
1 School of Information Science and Technology, Yunnan Normal University, Kunming 650500, China
2 Southwest United Graduate School, Kunming 650092, China
* Correspondence: zhangyaming@ynnu.edu.cn
Abstract: The Kepler optimization algorithm (KOA) is a metaheuristic algorithm based on Kepler's laws of planetary motion and has demonstrated outstanding performance on multiple test sets and for various optimization issues. However, the KOA is hampered by insufficient convergence accuracy, weak global search ability, and slow convergence speed. To address these deficiencies, this paper presents a multi-strategy fusion Kepler optimization algorithm (MKOA). Firstly, the algorithm initializes the population using a Good Point Set, enhancing population diversity. Secondly, Dynamic Opposition-Based Learning is applied to population individuals to further improve global exploration effectiveness. Furthermore, we introduce the Normal Cloud Model to perturb the best solution, improving its convergence rate and accuracy. Finally, a new position-update strategy is introduced to balance local and global search, helping the KOA escape local optima. To test the performance of the MKOA, we evaluated it on the CEC2017 and CEC2019 test suites. The data indicate that the MKOA has more advantages than other algorithms in terms of practicality and effectiveness. To assess engineering applicability, this study selected three classic engineering cases. The results reveal that the MKOA demonstrates strong applicability in engineering practice.
Keywords: Kepler optimization algorithm; good point set; normal cloud model; opposition-
based learning
MSC: 68W50
1. Introduction
With the rapid advancement of technology and the increasing complexity of engineering practice, the scale and complexity of optimization challenges are progressively escalating. These practical optimization issues typically feature multimodality and high dimensionality, accompanied by a multitude of local optima and tight, highly nonlinear constraints; examples include feature selection [1], image processing [2], wireless sensor networks [3], UAV route planning [4], and job shop scheduling problems [5]. Conventional optimization techniques, including gradient descent, Newton's method, and conjugate gradient algorithms, typically solve problems by constructing precise mathematical models. These classical algorithms are computationally inefficient and intensive, leaving traditional approaches ill-equipped for such optimization issues [6]. Metaheuristic algorithms (MAs) are widely regarded as a common approach to solving optimization issues because of their flexibility, practicality, and robustness [7].
MAs are divided into four categories: evolutionary algorithms (EAs), swarm intelligence algorithms (SIAs), optimization algorithms derived from human life activities, and optimization algorithms based on physical laws [8]. The categorization of MAs is presented in Figure 1.
[Figure 1 shows four branches of metaheuristic algorithms with examples: evolutionary algorithms (GA, DE, COA, LEA, AEO); swarm intelligence algorithms (PSO, ACO, DBO, NOA); algorithms based on human life activities (TLBO, PO, QSA, VPLA); and algorithms based on physical laws (SA, GSA, EVO, SCA).]
Figure 1. Classification of metaheuristic algorithms.
Natural evolution processes inspire the design of EAs. The genetic algorithm (GA) [9] is among the most famous and classic evolutionary algorithms, and its variants continue to be studied and proposed [10,11]. GAs are founded on Darwin's theory of natural selection, where the optimal population is reached through iterative processes involving selection, crossover, and mutation. This category of algorithms is also extensive in scope: the coronavirus optimization algorithm (COA) [12] models the spread of the coronavirus starting from patient zero, taking into account the probability of reinfection and the implementation of social measures. In addition, differential evolution algorithms (DEs) [13], the invasive tumor growth optimization algorithm (ITGO) [14], the love evolution algorithm (LEA) [15], and others belong to the category of EAs.
The SIAs are primarily inspired by the behaviors of biological groups, applying the survival strategies and behavioral patterns of certain animals in nature to the development of these algorithms. An example is particle swarm optimization (PSO), which draws inspiration from the predatory behavior of birds [16]. In a standard PSO algorithm, each particle's position is updated using both the globally optimal particle's position and the particle's own optimal (local) position. The movement of the entire swarm transitions from a disordered state to an ordered one, ultimately converging with all particles clustering at the optimal position. Dorigo introduced the ant colony optimization (ACO) algorithm in 1992, which draws inspiration from how ants forage [17]. The ACO algorithm mimics the process in which ants leave pheromone trails while foraging, guiding other ants in selecting their paths; the shortest route is determined by the greatest pheromone concentration. The SIAs also include the gray wolf algorithm (GWO) [18], the Harris hawk algorithm (HHO) [19], the whale optimization algorithm (WOA) [20], the sparrow search algorithm (SSA) [21], the sea-horse optimizer (SHO) [22], the seagull optimization algorithm (SOA) [23], the dung beetle optimizer (DBO) [24], the nutcracker optimizer algorithm (NOA) [25], and the marine predators algorithm (MPA) [26].
The third kind of metaheuristic is optimization algorithms based on human life activities, inspired by human production processes and daily life. Teaching–Learning-Based Optimization (TLBO) [27] is the most well-known algorithm of this type. The TLBO simulates the conventional process of teaching, and the entire optimization process encompasses two stages: during the teacher stage, each student learns from the best individual; at the learner stage, each student absorbs knowledge from randomly chosen peers. The inspiration for the political algorithm (PO) [28] comes from people's political behavior. The volleyball premier league algorithm (VPLA) [29] simulates the interaction and dynamic competition between diverse volleyball teams. The queuing search algorithm (QSA) [30] draws inspiration from human queuing activities.
As the last type of MAs, optimization algorithms grounded in physical laws are abstracted from mathematical and physical principles. For instance, simulated annealing (SA) [31], the energy valley optimizer (EVO) [32], the light spectrum optimizer (LSO) [33], the gravitational search algorithm (GSA) [34], central force optimization (CFO) [35], and the sine–cosine algorithm (SCA) [36] belong to this category.
Among the many MAs available, the Kepler optimization algorithm (KOA) [37] is an optimization algorithm grounded in physical laws. Test results on multiple test sets demonstrate that it is an effective algorithm for dealing with many optimization problems, proving its superior performance. The KOA shows outstanding performance thanks to several key mechanisms: it uses a planetary orbital velocity adjustment mechanism to achieve a balance between exploration and exploitation, simulates the orbital motion of planets in the solar system by dynamically adjusting the search direction to avoid local optima, and adopts an elite mechanism to ensure that the planets and the sun reach the most favorable positions. The KOA has already been utilized to address many practical issues. Hakmi and his team [38] adopted the KOA to address the economic dispatch of combined heat and power units in power systems. Houssein et al. [39] developed an improved KOA (I-KOA) and applied it to feature selection in liver disease classification. Abdel Basset and colleagues [40] used the KOA for CXR image segmentation at different threshold levels. In addition, the KOA has also been applied in photovoltaic research [41,42]. A comprehensive review and comparative analysis of the relevant academic literature makes it apparent that the KOA shows exceptional performance and competitiveness in addressing complex optimization issues.
However, according to the No Free Lunch (NFL) theorem [43], no single MA can effectively address every optimization issue. This motivates us to continuously innovate and refine existing metaheuristic algorithms to address diverse problems. The KOA has therefore been undergoing continuous improvements since its introduction, with the aim of maximizing its capability to solve various engineering application problems. Regarding the shortcomings of the KOA, this paper explores new improvement strategies based on previous work and introduces a multi-strategy fusion KOA algorithm (MKOA).
1.1. Paper Contributions
The main contributions are summarized below:
• A multi-strategy fusion Kepler optimization algorithm (MKOA) is proposed.
• In tests on CEC2017 and CEC2019, the proposed MKOA is validated to perform better than the comparison algorithms.
• Three real-world engineering optimization challenges are addressed using the proposed algorithm, which highlights its advantages in engineering practice.
1.2. Paper Structure
The layout is outlined below. In Section 2, the principles of the KOA are introduced. In Section 3, the MKOA is proposed, and four improvement strategies are introduced. The statistical results of the MKOA on CEC2017 and CEC2019 are presented in Section 4. The implementation of the MKOA in three practical engineering optimization scenarios is discussed in Section 5. Finally, the last section gives a summary.
2. Principle of Kepler Optimization Algorithm
Abdel Basset et al., drawing inspiration from the laws of celestial motion, proposed the KOA [37]. This algorithm is specifically designed to address single-objective and continuous optimization issues. Its core mechanism simulates the elliptical orbital motion of planets around the sun, searching efficiently for the best solution. The KOA has powerful global optimization capability and high solving precision when dealing with these problems. We elaborate on the mathematical model of the KOA in detail next.
2.1. Initialization
During the initialization process, the KOA randomly initializes N planets across the entire solution domain, each with d dimensions. Its mathematical model is given by

\[ X_i^j = X_{i,lb}^j + r \times \left( X_{i,ub}^j - X_{i,lb}^j \right), \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, d \tag{1} \]

where X_i^j is the position of the ith planet in the jth dimension; X_{i,lb}^j and X_{i,ub}^j denote the minimum and maximum bounds of the jth dimension; and r is a random number between 0 and 1. In addition, the KOA also initializes the orbital eccentricity e using Equation (2) and the orbital period T using Equation (3):

\[ e_i = r, \quad i = 1, 2, \ldots, N \tag{2} \]

\[ T_i = |rn|, \quad i = 1, 2, \ldots, N \tag{3} \]

where rn is a random sample drawn from a Gaussian distribution.
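As an illustrative sketch (not the authors' reference code), the initialization of Equations (1)–(3) can be written as follows; the uniform and Gaussian sampling are the only assumptions carried over from the text:

```python
import random

def initialize_planets(n, d, lb, ub, seed=None):
    """Randomly place n planets in d dimensions (Eqs. (1)-(3))."""
    rng = random.Random(seed)
    # Eq. (1): X_i^j = lb_j + r * (ub_j - lb_j), with r ~ U(0, 1)
    X = [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(d)]
         for _ in range(n)]
    # Eq. (2): orbital eccentricity e_i ~ U(0, 1)
    e = [rng.random() for _ in range(n)]
    # Eq. (3): orbital period T_i = |rn|, rn drawn from a Gaussian
    T = [abs(rng.gauss(0.0, 1.0)) for _ in range(n)]
    return X, e, T
```

The per-dimension bound lists `lb` and `ub` are hypothetical names for the X_{i,lb} and X_{i,ub} vectors above.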
2.2. Defining the Gravitational Force (F)
The gravity between the planet and the star affects the planet's orbital velocity. Specifically, the orbital velocity of the planet increases when the planet approaches the star and decreases as it moves away. The gravitational model is shown below:

\[ Fg_i(t) = e_i \times \mu(t) \times \frac{\bar{M}_s \times \bar{m}_i}{\bar{R}_i^2 + \lambda} + r_1 \tag{4} \]

where e_i is the eccentricity; µ(t) denotes the universal gravitational parameter; r_1 is a random value in [0, 1]; λ is a small constant; $\bar{M}_s$ and $\bar{m}_i$ denote the normalized masses of the sun and the planet, respectively, with mathematical representations given by Equations (7) and (8); and $\bar{R}_i$ represents the normalized value of the Euclidean distance R_i between the sun X_s and the planet X_i.
The R_i is given here:

\[ R_i(t) = \left\| X_s(t) - X_i(t) \right\|_2 = \sqrt{ \sum_{j=1}^{d} \left( X_{S,j}(t) - X_{i,j}(t) \right)^2 } \tag{5} \]

\[ \bar{R}_i = \frac{R_i(t) - \min(R(t))}{\max(R(t)) - \min(R(t))} \tag{6} \]

\[ \bar{M}_s = \frac{fit_s(t) - worst(t)}{\sum_{k=1}^{N} \left( fit_k(t) - worst(t) \right)} \tag{7} \]

\[ \bar{m}_i = r_2 \, \frac{fit_i(t) - worst(t)}{\sum_{k=1}^{N} \left( fit_k(t) - worst(t) \right)} \tag{8} \]

where

\[ fit_s(t) = best(t) = \min_{k \in \{1, 2, \ldots, N\}} fit_k(t) \tag{9} \]

\[ worst(t) = \max_{k \in \{1, 2, \ldots, N\}} fit_k(t) \tag{10} \]
where r_2 is a random value in [0, 1]. The mathematical model of µ(t) is given below:

\[ \mu(t) = \mu_0 \times \exp\left( -\gamma \, \frac{t}{T_{max}} \right) \tag{11} \]

where µ_0 and γ are predefined values, t is the current cycle, and T_max is the maximum cycle count.
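As a sketch of Equations (4) and (11) (the parameter defaults below are illustrative assumptions; the normalized masses and distance are those of Equations (6)–(8)):

```python
import math

def mu(t, t_max, mu0=0.1, gamma=15.0):
    """Eq. (11): exponentially decaying gravitational parameter.
    mu0 and gamma are the predefined values; defaults are illustrative."""
    return mu0 * math.exp(-gamma * t / t_max)

def gravitational_force(e_i, mu_t, Ms_norm, m_norm, R_norm,
                        lam=1e-5, r1=0.5):
    """Eq. (4): F = e_i * mu(t) * (Ms_norm * m_norm) / (R_norm^2 + lam) + r1.
    Ms_norm, m_norm, R_norm are the normalized quantities of Eqs. (6)-(8);
    lam is a small constant and r1 ~ U(0, 1) in the algorithm."""
    return e_i * mu_t * (Ms_norm * m_norm) / (R_norm ** 2 + lam) + r1
```

As expected from Equation (4), the force term grows as the normalized distance shrinks, which is what drives the velocity adjustment in the next subsection.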
2.3. Calculating an Object’s Velocity
The orbital velocity increases when the planet approaches the star; otherwise, the orbital velocity decreases. When an object approaches the star, the gravity becomes exceedingly strong; as a result, each planet attempts to accelerate in order to resist the intense gravity of the star. The mathematical model is described below:
\[ \vec{v}_i(t) = \begin{cases} \vec{\delta} \times \left( 2 r_4 \vec{X}_i - \vec{X}_b \right) + \ddot{\delta} \times \left( \vec{X}_a - \vec{X}_b \right) + \left(1 - R_{i\text{-norm}}(t)\right) \times \sigma \times \vec{U}_1 \times \vec{r}_5 \times \left( \vec{X}_{i,ub} - \vec{X}_{i,lb} \right), & \text{if } R_{i\text{-norm}}(t) \le 0.5 \\ r_4 \times \kappa \times \left( \vec{X}_a - \vec{X}_i \right) + \left(1 - R_{i\text{-norm}}(t)\right) \times \sigma \times U_2 \times \vec{r}_5 \times r_3 \times \left( \vec{X}_{i,ub} - \vec{X}_{i,lb} \right), & \text{else} \end{cases} \tag{12} \]

\[ \vec{\delta} = \vec{U} \times \vec{M} \times \kappa, \qquad \ddot{\delta} = 1 - \vec{U} \times \bar{M} \times \kappa \tag{13} \]

\[ \kappa = \left[ \mu(t) \times (M_s + m_i) \times \left| \frac{2}{R_i(t) + \epsilon} - \frac{1}{a_i(t) + \epsilon} \right| \right]^{\frac{1}{2}} \tag{14} \]

\[ \vec{M} = r_3 \times (1 - r_4) + r_4, \qquad \bar{M} = r_3 \times (1 - \vec{r}_5) + \vec{r}_5 \tag{15} \]

\[ \vec{U} = \begin{cases} 0, & \text{if } \vec{r}_5 \le \vec{r}_6 \\ 1, & \text{else} \end{cases} \tag{16} \]

\[ \sigma = \begin{cases} 1, & \text{if } r_4 \le 0.5 \\ -1, & \text{else} \end{cases} \tag{17} \]

\[ \vec{U}_1 = \begin{cases} 0, & \text{if } \vec{r}_5 \le r_4 \\ 1, & \text{else} \end{cases}, \qquad U_2 = \begin{cases} 0, & \text{if } r_3 \le r_4 \\ 1, & \text{else} \end{cases} \tag{18} \]
where v_i(t) is the ith planet's speed; r_3 and r_4 are probabilistic variables with a uniform distribution over the interval [0, 1]; $\vec{r}_5$ and $\vec{r}_6$ are two binary vectors, each component of which can only be 0 or 1; $\vec{X}_a$ and $\vec{X}_b$ denote planets randomly chosen from the current solutions; σ is a number randomly chosen to be either −1 or 1, used to adjust the exploration direction; and a_i(t) is the semi-major axis of the elliptical orbit of object i at time t, computed as follows:

\[ a_i(t) = \left[ \frac{r_3 \times T_i^2 \times \mu(t) \times (M_s + m_i)}{4 \pi^2} \right]^{\frac{1}{3}} \tag{19} \]
2.4. Escaping from Local Optimum
Most planets revolve counter-clockwise around the star; however, some deviate from this norm by orbiting in the reverse direction. In the original KOA, this behavior is leveraged to modify the exploration direction, helping the KOA escape local optima. The algorithm introduces a control variable, σ, which dynamically modifies the direction of the search, thereby controlling the orbital direction of a planet around the star. This approach increases the likelihood that agents will effectively explore the entire search space.
2.5. Updating Objects’ Positions
Planets follow Kepler's laws and orbit the sun periodically along their respective elliptical trajectories. Over time, they first approach the star and then gradually move further away. The KOA elaborates on this behavior through two key steps: exploration and exploitation. In the KOA, exploration operations are carried out when planets are located further away from the star; when a planet approaches the star, exploitation operations are carried out. Its mathematical model is given by

\[ \vec{X}_i(t+1) = \vec{X}_i(t) + \sigma \times \vec{V}_i(t) + \vec{U} \times \left( Fg_i(t) + |r| \right) \times \left( \vec{X}_S(t) - \vec{X}_i(t) \right) \tag{20} \]
2.6. Updating the Distance from the Sun
To enhance the KOA's search ability, the KOA simulates the naturally fluctuating distance between planets and the sun over time. When a planet is near the star, the exploitation operator is activated to enhance the convergence rate of the algorithm; conversely, when a planet is far from the star, the exploration operator is activated to escape local optima. To apply this idea in the KOA, a control parameter h is introduced, which varies with the number of iterations. When h is larger, the solution space is expanded using exploration operators to search for better solutions; otherwise, the search focuses on the area surrounding the current optimal individual to maximize exploitation. This is expressed mathematically as Equation (21):

\[ \vec{X}_i(t+1) = \vec{X}_i(t) \times \vec{U}_1 + \left(1 - \vec{U}_1\right) \times \left( \frac{\vec{X}_i(t) + \vec{X}_s(t) + \vec{X}_a(t)}{3} + h \times \left( \frac{\vec{X}_i(t) + \vec{X}_s(t) + \vec{X}_a(t)}{3} - \vec{X}_b(t) \right) \right) \tag{21} \]

\[ h = \frac{1}{e^{\eta r}} \tag{22} \]

where r is a random value in [0, 1]; the variable η denotes a linearly diminishing factor between 1 and −2, defined as follows:

\[ \eta = (a_2 - 1) \times r_4 + 1 \tag{23} \]

where a_2 represents the cyclic control parameter, as detailed in Equation (24) below:

\[ a_2 = -1 - \frac{t \,\%\, (T_{max}/T_C)}{T_{max}/T_C} \tag{24} \]

where T_C is a constant value, and % represents the remainder operation.
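As a sketch of how these control parameters interact (illustrative only; T_C = 3 follows Table 1, and r, r_4 are the random values of Equations (22) and (23) supplied by the caller):

```python
import math

def adaptive_h(t, t_max, r, Tc=3, r4=0.5):
    """Step size h of Eqs. (22)-(24)."""
    cycle = t_max / Tc
    # Eq. (24): a2 cycles from -1 down to -2 within each of the Tc cycles
    a2 = -1.0 - (t % cycle) / cycle
    # Eq. (23): eta = (a2 - 1) * r4 + 1, spanning [1, -2] overall
    eta = (a2 - 1.0) * r4 + 1.0
    # Eq. (22): h = 1 / e^(eta * r)
    return math.exp(-eta * r)
```

With r_4 = 0.5 and t = 0, η evaluates to 0 and h to 1; as t advances within a cycle, η turns negative and h grows, widening the search around the mean position.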
2.7. Elitism Mechanism
At this stage, an elite mechanism is implemented to ensure that the star and the planets reach their most favorable positions. Equation (25) summarizes this strategy:

\[ \vec{X}_i(t+1) = \begin{cases} \vec{X}_i(t+1), & \text{if } f\left(\vec{X}_i(t+1)\right) \le f\left(\vec{X}_i(t)\right) \\ \vec{X}_i(t), & \text{else} \end{cases} \tag{25} \]
2.8. KOA Pseudocode
Algorithm 1 presents the KOA's complete pseudocode.

Algorithm 1 KOA Pseudocode
Input: N, Tmax, µ0, γ, T̄
1: Initialization
2: Evaluation
3: Identify the current best solution Xs
4: while t < Tmax do
5:   Update ei, µ(t), best(t), and worst(t)
6:   for i = 1 : N do
7:     Calculate R̄i using Equation (6)
8:     Calculate Fgi using Equation (4)
9:     Calculate vi using Equation (12)
10:    Generate two independent random variables, r and r1
11:    if r > r1 then
12:      Use Equation (20) to update planets' positions
13:    else
14:      Use Equation (21) to update planets' positions
15:    end if
16:    Apply the elitist mechanism to select the optimal position, using Equation (25)
17:  end for
18:  t = t + 1
19: end while
Output: Xs
3. The Proposed Multi-Strategy Fusion Kepler Optimization
Algorithm (MKOA)
Like other metaheuristic algorithms, the KOA faces challenges such as premature convergence and a weak global search ability, especially when addressing high-dimensional and complicated issues [44]. Therefore, to remedy these defects of the KOA, this article integrates the Good Point Set strategy, the Dynamic Opposition-Based Learning strategy, the Normal Cloud Model strategy, and the New Exploration strategy to enhance the original KOA. The proposed strategies are discussed in detail in the following subsections.
3.1. Good Point Set Strategy
In MAs, the algorithm's global search ability is influenced by the location of the initial population. The traditional Kepler optimization algorithm (KOA) initializes the population randomly, which leads to drawbacks such as an uneven population distribution. This paper introduces a Good Point Set (GPS) strategy to improve the KOA by ensuring the initial solutions are evenly distributed across the search domain.
The concept of the GPS was introduced by Hua and Wang in 1978 [45]. This strategy enables the search space to be covered as uniformly as possible, even with a relatively small number of sample points [46]. Therefore, it has been used by many scholars for population initialization.
The basic definition of the GPS strategy is as follows. Let G_s be a unit cube in s-dimensional Euclidean space. If r ∈ G_s, the point set may be represented as

\[ p_n(k) = \left\{ \left\{ r_1^{(n)} k \right\}, \left\{ r_2^{(n)} k \right\}, \ldots, \left\{ r_s^{(n)} k \right\} \right\}, \quad 1 \le k \le n \tag{26} \]

\[ \varphi(n) = C(r, \varepsilon) \, n^{-1+\varepsilon} \tag{27} \]

where p_n(k) is the point set; if the deviation of p_n(k) satisfies Equation (27), it is called a Good Point Set. C(r, ε) is a constant that depends only on ε and r; {·} denotes the fractional part of a value; n denotes the sample size; and r is computed as follows:

\[ r_k = \left\{ 2 \cos\left( \frac{2 \pi k}{p} \right) \right\}, \quad k = 1, 2, \ldots, s \tag{28} \]

where p is the smallest prime that satisfies s ≤ (p − 3)/2.

The GPS strategy is applied to population initialization. The initialization equation of the ith individual is given by

\[ X_i^j = X_{i,lb}^j + \left\{ r_j \, i \right\} \times \left( X_{i,ub}^j - X_{i,lb}^j \right), \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, d \tag{29} \]
Figures 2 and 3, respectively, show the two-dimensional point sets generated by the GPS method and the random method for a population size of 500. Figure 4 illustrates frequency distribution histograms of the populations initialized by the two methods. As shown in the figures, the GPS strategy yields a more uniform distribution than random generation. In addition, for a fixed sample size, the distribution produced by the GPS strategy is always the same, which means that the GPS strategy is stable.
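A minimal sketch of GPS initialization under Equations (26)–(29); the dimension-wise indexing of r follows common GPS practice and is an assumption, not the authors' code:

```python
import math

def good_point_set(n, d):
    """Good Point Set of n points in the unit cube [0, 1]^d.

    r_j = {2 cos(2*pi*j/p)} with p the smallest prime satisfying
    d <= (p - 3) / 2; point k has coordinates {r_j * k}, where {.}
    denotes the fractional part (Eqs. (26)-(28)).
    """
    def is_prime(m):
        return m >= 2 and all(m % q for q in range(2, int(math.isqrt(m)) + 1))

    p = 5
    while not (is_prime(p) and d <= (p - 3) / 2):
        p += 1
    r = [2.0 * math.cos(2.0 * math.pi * (j + 1) / p) for j in range(d)]
    return [[(r[j] * k) % 1.0 for j in range(d)] for k in range(1, n + 1)]

def gps_initialize(n, d, lb, ub):
    """Eq. (29): map the unit-cube good points onto the search bounds."""
    pts = good_point_set(n, d)
    return [[lb[j] + pts[k][j] * (ub[j] - lb[j]) for j in range(d)]
            for k in range(n)]
```

Because the construction is deterministic, repeated calls with the same n and d reproduce the same point set, which is the stability property noted above.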
Figure 2. GPS strategy initializing the population.
Figure 3. Randomly initializing the population.
(a) GPS strategy (b) Random method
Figure 4. Comparison chart of frequency distribution histogram.
3.2. Dynamic Opposition-Based Learning Strategy
Tizhoosh [47] proposed Opposition-Based Learning (OBL) in 2005, drawing inspiration from the idea of opposition. Since then, the OBL strategy and its enhancements have been widely applied in various optimization algorithms [48–50]. The OBL strategy is defined as follows:

\[ \hat{X}_i = LB + UB - X_i \tag{30} \]

where X_i and $\hat{X}_i$ are the original solution and the opposite solution, respectively, and LB and UB represent the lower and upper limits of the search domain.

Between the initial solution and the opposite solution, the OBL strategy selects the one with the better fitness value for the next population iteration. Although this can effectively improve the diversity and quality of the population, the constant distance between the original solution and the generated opposite solution results in a lack of randomness, which may hinder optimization throughout the iteration process. To address this shortcoming of the OBL strategy, this study proposes a Dynamic Opposition-Based Learning (DOBL) strategy to further improve the diversity and quality of the population.
The DOBL strategy [49] adds a dynamic boundary mechanism to the original OBL strategy, thereby alleviating its insufficient randomness. The mathematical model of the DOBL strategy is given by

\[ \hat{X}_{i,j}^t = a_j^t + b_j^t - X_{i,j}^t \tag{31} \]

where $\hat{X}_{i,j}^t$ and $X_{i,j}^t$, respectively, represent the opposite solution and the original solution in the jth dimension of the ith vector during the tth iteration, and $a_j^t$ and $b_j^t$, respectively, represent the lower and upper values in the jth dimension during the tth iteration:

\[ a_j^t = \min\left( X_j^t \right), \qquad b_j^t = \max\left( X_j^t \right) \tag{32} \]

The KOA's search phase is altered using the DOBL strategy to boost the diversity and quality of the population. The DOBL strategy simultaneously considers the original solutions and the opposite solutions; after sorting by fitness value, only the top N solutions are retained. Algorithm 2 presents the pseudocode of the DOBL strategy.
Algorithm 2 DOBL Strategy Pseudocode
Input: D, N, X // D: dimensionality; N: population size; X: original solutions
1: for i = 1 : N do
2:   for j = 1 : D do
3:     X̂(t)i,j = a(t)j + b(t)j − X(t)i,j // Generate the opposite solution through Equation (31)
4:   end for
5: end for
6: Evaluate population fitness values (including both the original and the opposite solutions)
7: X ← Select the N best individuals from the set {X, X̂}
Output: X
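A compact sketch of one DOBL pass for a minimization problem (illustrative, not the authors' code):

```python
def dobl_step(X, fitness):
    """One DOBL pass (Eqs. (31)-(32)): build opposite solutions over
    dynamic bounds and keep the N fittest of the union."""
    n, d = len(X), len(X[0])
    # Eq. (32): dynamic bounds are the per-dimension min/max of the swarm
    a = [min(x[j] for x in X) for j in range(d)]
    b = [max(x[j] for x in X) for j in range(d)]
    # Eq. (31): opposite solution X_hat = a + b - X
    X_hat = [[a[j] + b[j] - x[j] for j in range(d)] for x in X]
    # Keep the N best individuals from {X, X_hat} (smaller fitness is better)
    pool = sorted(X + X_hat, key=fitness)
    return pool[:n]
```

Because the bounds a and b shrink with the population, the opposite solutions track the current search region rather than the fixed LB/UB of Equation (30), which is exactly the added randomness the DOBL strategy targets.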
3.3. Normal Cloud Model Strategy
In evaluating the significance of the current best individual position, we found that the KOA does not sufficiently exploit the information of the optimal solution, which often makes it prone to local optima. Thus, this study introduces a Normal Cloud Model (NCM) strategy [51] to perturb the current best solution. The principle is to exploit the randomness and fuzziness of the NCM strategy to perturb the optimal individual and thereby extend the search space of the KOA. Its definitions are given below.

Suppose X is a quantitative domain and F is a qualitative notion defined over X. An element x of X is a random instantiation of the notion F, and µF(x) ∈ [0, 1] is the degree of certainty of x with respect to F: µ : X → [0, 1], ∀x ∈ X, x → µ(x). Then x is referred to as a cloud droplet, and the distribution of x over the entire quantitative domain X is called a cloud. To reflect the uncertainty of the cloud, three parameters are used: expectation, Ex; entropy, En; and hyper-entropy, He. They are explained below:

(1) Ex represents the expected position of a cloud droplet.
(2) En denotes the uncertainty of a cloud droplet, reflecting the range and spread of the droplet distribution.
(3) He is the uncertainty quantification of the entropy, which describes the cloud thickness and reflects the degree to which the qualitative concept deviates from a normal distribution.
If x satisfies x ∼ N(Ex, En′²), where En′ ∼ N(En, He²) and En′ ≠ 0, the degree of certainty y for the qualitative variable x is calculated as follows:

\[ y = e^{ -\frac{(x - Ex)^2}{2 (En')^2} } \tag{33} \]

At this point, the distribution of x over the entire quantitative domain X is called a normal cloud, and y denotes the expected curve of the NCM strategy. Figure 5 illustrates normal clouds with different parameter settings. From Figure 5a,b, it can be observed that as He increases, the dispersion of cloud droplets also increases; from Figure 5a,c, it can be observed that as En increases, the distribution range of cloud droplets expands. Therefore, Figure 5 indirectly reveals the fuzziness and randomness of cloud droplets.
(a) Ex = 3, En = 0.5, He = 0.05. (b) Ex = 3, En = 0.5, He = 0.1. (c) Ex = 3, En = 1, He = 0.05.
Figure 5. Normal cloud model distribution diagram generated by different parameters.
The process of converting qualitative concepts into quantitative representations is called a forward normal cloud generator (NCG). With appropriate parameters, the NCG generates cloud droplets that essentially follow a normal distribution. The generation of cloud droplets by means of a cloud generator is expressed as:

\[ x = NCG(Ex, En, He, N_c) \tag{34} \]

In Equation (34), N_c represents the anticipated number of cloud droplets. Introducing the above NCM into the KOA perturbs the optimal individual. The NCG mathematical model is represented by Equation (35):

\[ \hat{x}_{best}^t = NCG\left( x_{best}^t, En, He, D \right) \tag{35} \]

where $x_{best}^t$ is the optimal solution of the population during the tth iteration, and D refers to the dimension of the optimal individual.
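A sketch of the forward cloud generator and the perturbation of Equation (35); the En/He defaults below are illustrative assumptions, not values taken from the paper:

```python
import random

def ncg(Ex, En, He, n_c, seed=None):
    """Forward normal cloud generator of Eq. (34): each droplet is
    drawn as x ~ N(Ex, En'^2) with En' ~ N(En, He^2)."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n_c):
        En_p = rng.gauss(En, He)          # second-order randomness
        drops.append(rng.gauss(Ex, abs(En_p)))
    return drops

def ncm_perturb(x_best, En=0.1, He=0.01, seed=None):
    """Eq. (35): perturb every dimension of the current best solution.
    The En/He defaults are illustrative, not from the paper."""
    rng = random.Random(seed)
    return [rng.gauss(v, abs(rng.gauss(En, He))) for v in x_best]
```

The two-level sampling (En′ around En, then x around Ex) is what produces the combined fuzziness and randomness shown in Figure 5.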
3.4. New Exploration Strategy
We apply a new position-update strategy to the original KOA. This strategy effectively balances local and global exploration by integrating the current solution, the optimal solution, and the suboptimal solution, thereby improving convergence accuracy while maintaining convergence speed. The modified Equations (12) and (21) are described by Equations (36) and (37), respectively.
\[ \vec{v}_i(t) = \begin{cases} \vec{\delta} \times \left( 2 r_4 \vec{X}_i - \vec{X}_m \right) + \ddot{\delta} \times \left( \vec{X}_m - \vec{X}_b \right) + \left(1 - R_{i\text{-norm}}(t)\right) \times \sigma \times \vec{U}_1 \times \vec{r}_5 \times \left( \vec{X}_{i,ub} - \vec{X}_{i,lb} \right), & \text{if } R_{i\text{-norm}}(t) \le 0.5 \\ r_4 \times \kappa \times \left( \vec{X}_m - \vec{X}_i \right) + \left(1 - R_{i\text{-norm}}(t)\right) \times \sigma \times U_2 \times \vec{r}_5 \times r_3 \times \left( \vec{X}_{i,ub} - \vec{X}_{i,lb} \right), & \text{else} \end{cases} \tag{36} \]

\[ \vec{X}_i(t+1) = \vec{X}_i(t) \times \vec{U}_1 + \left(1 - \vec{U}_1\right) \times \left( \vec{X}_m + h \times \left( \vec{X}_m - \vec{X}_b(t) \right) \right) \tag{37} \]

where $\vec{X}_m$ is defined as follows:

\[ \vec{X}_m = \frac{\vec{X}_{cs} + \vec{X}_{os} + \vec{X}_{ss}}{3} \tag{38} \]

where $\vec{X}_{cs}$ represents the current solution, $\vec{X}_{os}$ denotes the optimal solution, and $\vec{X}_{ss}$ represents the suboptimal solution.
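The mean guide of Equation (38) and the update of Equation (37) can be sketched dimension-wise as follows (illustrative; u1 and h play the roles of U1 from Equation (18) and the adaptive factor of Equation (22)):

```python
def mean_guide(x_cur, x_opt, x_sub):
    """Eq. (38): X_m is the mean of the current, optimal and
    suboptimal solutions."""
    return [(a + b + c) / 3.0 for a, b, c in zip(x_cur, x_opt, x_sub)]

def new_position_update(x_i, x_m, x_b, u1, h):
    """Eq. (37): dimensions with u1 = 1 keep the old position,
    dimensions with u1 = 0 move toward X_m with step h."""
    return [xi * u + (1 - u) * (xm + h * (xm - xb))
            for xi, u, xm, xb in zip(x_i, u1, x_m, x_b)]
```

Pulling toward the average of the best, second-best, and current solutions (rather than a single leader) is what tempers the greediness of the original update.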
3.5. The MKOA Implementation Process
The MKOA first initializes the population using the Good Point Set, enhancing population diversity, and introduces a new OBL strategy, Dynamic Opposition-Based Learning, to enhance its global exploration ability. Additionally, the NCM method is applied to the current best solution, introducing perturbation and mutation to enhance the local escape capability. Finally, the MKOA uses a new position-update strategy to enhance solution quality. Algorithm 3 presents the pseudocode of the MKOA, and the flow chart of the MKOA appears in Figure 6.
Algorithm 3 Pseudocode of the MKOA
Input: N, Tmax, µ0, γ, T̄
1: Initialize the population by introducing the Good Point Set through Equation (29)
2: Evaluate fitness values for the initial population
3: Identify the current best solution Xs
4: while t < Tmax do
5:   Update ei, µ(t), best(t), and worst(t)
6:   Update the population location using the DOBL strategy by Equation (31)
7:   for i = 1 : N do
8:     Calculate R̄i using Equation (6)
9:     Calculate Fgi using Equation (4)
10:    Calculate vi using Equation (36)
11:    Generate two independent random variables, r and r1
12:    if r > r1 then
13:      Use Equation (20) to update planets' positions
14:    else
15:      Use Equation (37) to update planets' positions
16:    end if
17:    Apply the elitist mechanism to select the optimal position, using Equation (25)
18:  end for
19:  Evaluate fitness values and determine the best solution Xs
20:  Disturb Xs using the NCM strategy by Equation (35)
21:  t = t + 1
22: end while
Output: Xs
Figure 6. Flowchart of MKOA.
4. Experiments and Discussion
In this section, we evaluate the MKOA's performance on a range of test suites and conduct experiments to analyze the effect of the improvement strategies. In addition, the Wilcoxon rank-sum test was used to analyze the differences between the MKOA and its competitors and their overall performance.
4.1. Experimental Environment
The MKOA was implemented in MATLAB R2022a. The simulation environment was as follows: 64-bit Windows 10 operating system, AMD Ryzen 5 5600 CPU @ 3.50 GHz, and 16.00 GB of memory.
4.2. Competitor Algorithms
To test the performance of the MKOA, the MKOA and the comparison algorithms were evaluated on the CEC2017 and CEC2019 test suites. We compared the MKOA's performance with that of eight other popular optimization algorithms: the Kepler optimization algorithm (KOA) [37], the Harris hawk algorithm (HHO) [19], the dung beetle optimizer (DBO) [24], the whale optimization algorithm (WOA) [20], the gray wolf algorithm (GWO) [18], the sparrow search algorithm (SSA) [21], the giant trevally optimizer (GTO) [52], and velocity pausing particle swarm optimization (VPPSO) [53]. Table 1 gives the default settings of these competing algorithms; N and Tmax are fixed at 30 and 500, respectively. These algorithms incorporate diverse advanced search techniques and structures; most of them offer significant advantages and have been proposed in recent years. Comparing the proposed algorithm with eight other algorithms highlights its superiority in addressing similar problems, demonstrating its effectiveness and potential to enhance future research efforts.
Table 1. Key parameter configuration information of the competitors and the MKOA.

Algorithm | Parameter | Value
KOA/MKOA | Tc, M0, λ | 3, 0.1, 15
HHO | E0, E1 | [−1, 1], [0, 2]
DBO | RDB, EDB, FDB, SDB | 6, 6, 7, 11
WOA | a, a2, b | [0, 2], [−1, −2], 1
GWO | a (convergence parameter) | [2, 0]
SSA | PD, SD, ST | 0.2, 0.1, 0.8
GTO | step, β | 0.01, 1.5
VPPSO | c1, c2 | 1.5, 1.5
4.3. CEC2017 Test Suite Experimental Results
To assess the optimization effectiveness of the MKOA, we selected the CEC2017 test suite, a complex set of benchmark functions for testing optimization algorithms [54]. CEC2017 includes 29 test functions of four types: unimodal (F1 and F3), multimodal (F4–F10), hybrid (F11–F20), and composite (F21–F30). Because of the instability of the F2 function, it was removed from the suite, so we did not conduct experiments on F2. The F1–F10 functions were used to analyze the MKOA's optimization ability, while the more complicated F11–F30 functions were applied to assess the MKOA's capability to escape local optima.
4.3.1. CEC2017 Statistical Results Analysis
This section uses CEC2017 to assess the algorithm's performance, with the dimension fixed at 30. To ensure reliable results, each algorithm was run twenty times independently. Table 2 presents the experimental results. The performance metrics are the best value, average value, and standard deviation; bold represents the optimal result.
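The reporting protocol above (twenty independent runs per algorithm, then best, average, and standard deviation) can be sketched as follows. The sphere objective and the random-search optimizer are illustrative stand-ins, not the MKOA or a CEC2017 function.

```python
import random
import statistics

def sphere(x):
    # Toy objective standing in for a benchmark function (illustrative only).
    return sum(v * v for v in x)

def random_search(obj, dim=30, max_iter=500, seed=0):
    # Placeholder optimizer; any metaheuristic (e.g., the MKOA) plugs in here.
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_iter):
        x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
        best = min(best, obj(x))
    return best

# Twenty independent runs, as in the experimental protocol.
results = [random_search(sphere, seed=s) for s in range(20)]
best, avg, std = min(results), statistics.mean(results), statistics.stdev(results)
print(f"min={best:.3e}  avg={avg:.3e}  std={std:.3e}")
```

The same three statistics per function and per algorithm are what Tables 2, 3, and 6 report.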
Analysis of Table 2 shows that, among the 29 functions, the MKOA obtained the best solution on 22, accounting for 75.8%. For the F1–F9 test functions, the MKOA obtained the best average fitness value on five functions (F3–F4, F6–F7, F9), and on F1, F5, and F8 it performed better than the original KOA. For the hybrid and composite functions F10–F30, the MKOA ranks first in the number of best average solutions (F11–F20, F22–F25, F27, F29–F30) and remains strongly competitive on most of the other functions. In summary, the MKOA handles complicated problems well and has better robustness.
Table 2. CEC2017 test results.

Fn | Metric | MKOA | KOA | HHO | DBO | WOA | GWO | SSA | GTO | VPPSO
F1 | min | 1.41×10^3 | 4.59×10^3 | 1.72×10^8 | 2.54×10^7 | 3.11×10^9 | 9.66×10^8 | 1.53×10^2 | 9.14×10^9 | 2.87×10^2
F1 | avg | 6.04×10^3 | 1.06×10^5 | 5.35×10^8 | 3.32×10^8 | 4.98×10^9 | 3.22×10^9 | 5.81×10^3 | 1.61×10^10 | 1.35×10^8
F1 | std | 4.51×10^3 | 1.89×10^5 | 3.21×10^8 | 2.28×10^8 | 1.84×10^9 | 1.50×10^9 | 6.39×10^3 | 2.47×10^9 | 2.15×10^8
F3 | min | 2.72×10^4 | 5.61×10^4 | 3.57×10^4 | 8.39×10^4 | 1.51×10^5 | 4.07×10^4 | 3.55×10^4 | 6.52×10^4 | 3.48×10^4
F3 | avg | 4.10×10^4 | 7.53×10^4 | 5.38×10^4 | 9.21×10^4 | 2.55×10^5 | 6.17×10^4 | 4.94×10^4 | 7.49×10^4 | 6.02×10^4
F3 | std | 8.38×10^3 | 1.29×10^4 | 9.41×10^3 | 9.83×10^3 | 5.89×10^4 | 9.97×10^3 | 9.05×10^3 | 3.93×10^3 | 1.75×10^4
F4 | min | 4.10×10^2 | 4.25×10^2 | 5.73×10^2 | 4.95×10^2 | 8.73×10^2 | 5.41×10^2 | 4.04×10^2 | 1.24×10^3 | 4.85×10^2
F4 | avg | 4.69×10^2 | 5.01×10^2 | 7.32×10^2 | 6.45×10^2 | 1.37×10^3 | 6.62×10^2 | 5.00×10^2 | 2.76×10^3 | 5.44×10^2
F4 | std | 3.79×10^1 | 4.11×10^1 | 9.78×10^1 | 1.17×10^2 | 4.50×10^2 | 8.98×10^1 | 4.11×10^1 | 9.13×10^2 | 3.94×10^1
F5 | min | 6.17×10^2 | 6.80×10^2 | 7.10×10^2 | 6.66×10^2 | 7.85×10^2 | 5.59×10^2 | 6.74×10^2 | 7.64×10^2 | 6.13×10^2
F5 | avg | 6.81×10^2 | 7.11×10^2 | 7.55×10^2 | 7.79×10^2 | 9.10×10^2 | 6.12×10^2 | 7.72×10^2 | 8.17×10^2 | 6.70×10^2
F5 | std | 3.44×10^1 | 2.37×10^1 | 3.78×10^1 | 7.23×10^1 | 7.46×10^1 | 2.19×10^1 | 4.32×10^1 | 2.20×10^1 | 3.52×10^1
F6 | min | 6.02×10^2 | 6.02×10^2 | 6.60×10^2 | 6.22×10^2 | 6.64×10^2 | 6.05×10^2 | 6.33×10^2 | 6.69×10^2 | 6.34×10^2
F6 | avg | 6.03×10^2 | 6.04×10^2 | 6.67×10^2 | 6.47×10^2 | 6.89×10^2 | 6.13×10^2 | 6.44×10^2 | 6.74×10^2 | 6.46×10^2
F6 | std | 1.22×10^0 | 2.36×10^0 | 4.14×10^0 | 1.77×10^1 | 1.50×10^1 | 4.44×10^0 | 9.47×10^0 | 3.82×10^0 | 7.67×10^0
F7 | min | 9.10×10^2 | 9.25×10^2 | 1.21×10^3 | 9.42×10^2 | 1.20×10^3 | 8.64×10^2 | 1.05×10^3 | 1.22×10^3 | 8.85×10^2
F7 | avg | 9.43×10^2 | 9.59×10^2 | 1.30×10^3 | 1.00×10^3 | 1.36×10^3 | 9.44×10^2 | 1.20×10^3 | 1.34×10^3 | 1.06×10^3
F7 | std | 2.31×10^1 | 3.09×10^1 | 8.26×10^1 | 6.41×10^1 | 9.29×10^1 | 5.22×10^1 | 9.02×10^1 | 4.92×10^1 | 1.19×10^2
F8 | min | 8.52×10^2 | 9.51×10^2 | 9.38×10^2 | 9.59×10^2 | 1.03×10^3 | 8.81×10^2 | 8.82×10^2 | 9.70×10^2 | 8.95×10^2
F8 | avg | 9.55×10^2 | 9.91×10^2 | 9.90×10^2 | 1.05×10^3 | 1.11×10^3 | 9.03×10^2 | 9.58×10^2 | 1.02×10^3 | 9.42×10^2
F8 | std | 5.92×10^1 | 2.32×10^1 | 2.94×10^1 | 5.68×10^1 | 5.21×10^1 | 1.38×10^1 | 4.12×10^1 | 2.12×10^1 | 2.25×10^1
F9 | min | 9.98×10^2 | 9.70×10^2 | 7.94×10^3 | 2.43×10^3 | 6.66×10^3 | 1.12×10^3 | 4.33×10^3 | 7.26×10^3 | 2.63×10^3
F9 | avg | 1.32×10^3 | 1.59×10^3 | 9.28×10^3 | 5.30×10^3 | 1.01×10^4 | 2.91×10^3 | 5.17×10^3 | 8.82×10^3 | 4.34×10^3
F9 | std | 4.48×10^2 | 8.83×10^2 | 1.14×10^3 | 1.91×10^3 | 3.62×10^3 | 1.62×10^3 | 4.82×10^2 | 9.12×10^2 | 8.66×10^2
F10 | min | 7.11×10^3 | 8.11×10^3 | 5.19×10^3 | 4.74×10^3 | 6.55×10^3 | 3.49×10^3 | 4.56×10^3 | 5.72×10^3 | 4.08×10^3
F10 | avg | 7.69×10^3 | 8.29×10^3 | 6.43×10^3 | 6.62×10^3 | 7.93×10^3 | 5.73×10^3 | 5.55×10^3 | 6.59×10^3 | 5.52×10^3
F10 | std | 4.82×10^2 | 1.28×10^2 | 9.97×10^2 | 1.22×10^3 | 9.84×10^2 | 1.95×10^3 | 6.62×10^2 | 7.65×10^2 | 1.19×10^3
F11 | min | 1.18×10^3 | 1.19×10^3 | 1.39×10^3 | 1.33×10^3 | 4.41×10^3 | 1.38×10^3 | 1.16×10^3 | 3.53×10^3 | 1.32×10^3
F11 | avg | 1.21×10^3 | 1.27×10^3 | 1.54×10^3 | 2.52×10^3 | 1.27×10^4 | 2.57×10^3 | 1.27×10^3 | 4.20×10^3 | 1.63×10^3
F11 | std | 3.69×10^1 | 3.98×10^1 | 1.31×10^2 | 2.07×10^3 | 3.82×10^3 | 1.07×10^3 | 5.01×10^1 | 4.62×10^2 | 3.05×10^2
F12 | min | 9.88×10^4 | 1.46×10^5 | 1.18×10^7 | 2.29×10^6 | 2.20×10^8 | 1.27×10^7 | 2.44×10^5 | 1.43×10^9 | 1.16×10^7
F12 | avg | 2.39×10^5 | 1.27×10^6 | 5.53×10^7 | 1.31×10^8 | 5.77×10^8 | 1.21×10^8 | 7.39×10^5 | 2.89×10^9 | 3.82×10^7
F12 | std | 1.42×10^5 | 2.09×10^6 | 3.28×10^7 | 1.92×10^8 | 4.31×10^8 | 1.11×10^8 | 4.92×10^5 | 8.19×10^8 | 1.83×10^7
F13 | min | 1.58×10^3 | 2.45×10^3 | 3.55×10^5 | 2.88×10^4 | 5.11×10^6 | 8.43×10^4 | 3.93×10^3 | 2.79×10^8 | 1.71×10^4
F13 | avg | 6.12×10^3 | 2.66×10^4 | 1.78×10^7 | 9.97×10^6 | 1.48×10^7 | 9.93×10^6 | 1.52×10^4 | 1.76×10^9 | 1.22×10^5
F13 | std | 4.97×10^3 | 1.92×10^4 | 4.23×10^7 | 1.65×10^7 | 9.02×10^6 | 2.31×10^7 | 1.66×10^4 | 1.03×10^9 | 5.93×10^4
F14 | min | 1.49×10^3 | 1.56×10^3 | 4.71×10^4 | 1.98×10^4 | 2.29×10^5 | 3.71×10^4 | 7.97×10^3 | 7.04×10^5 | 7.60×10^3
F14 | avg | 1.68×10^3 | 3.60×10^3 | 1.48×10^6 | 1.07×10^5 | 3.37×10^6 | 2.51×10^5 | 3.36×10^4 | 1.71×10^6 | 2.85×10^5
F14 | std | 1.19×10^2 | 3.30×10^3 | 1.18×10^6 | 1.08×10^5 | 2.22×10^6 | 3.06×10^5 | 3.31×10^4 | 6.14×10^5 | 2.90×10^5
F15 | min | 1.71×10^3 | 2.29×10^3 | 3.95×10^4 | 1.41×10^4 | 1.26×10^5 | 5.05×10^4 | 1.86×10^3 | 2.21×10^5 | 1.40×10^4
F15 | avg | 2.92×10^3 | 8.47×10^3 | 1.23×10^5 | 1.09×10^7 | 3.89×10^6 | 2.70×10^6 | 6.10×10^3 | 9.14×10^5 | 4.98×10^4
F15 | std | 1.12×10^3 | 6.11×10^3 | 5.54×10^4 | 3.48×10^7 | 3.34×10^6 | 2.54×10^6 | 3.76×10^3 | 8.31×10^5 | 4.70×10^4
F16 | min | 2.35×10^3 | 2.67×10^3 | 3.15×10^3 | 2.86×10^3 | 3.23×10^3 | 2.05×10^3 | 2.26×10^3 | 3.60×10^3 | 2.37×10^3
F16 | avg | 2.88×10^3 | 3.25×10^3 | 3.71×10^3 | 3.36×10^3 | 4.03×10^3 | 2.91×10^3 | 2.93×10^3 | 4.47×10^3 | 2.99×10^3
F16 | std | 3.33×10^2 | 3.53×10^2 | 5.28×10^2 | 3.52×10^2 | 5.84×10^2 | 4.86×10^2 | 4.44×10^2 | 7.61×10^2 | 3.71×10^2
F17 | min | 1.82×10^3 | 1.83×10^3 | 2.25×10^3 | 2.15×10^3 | 2.36×10^3 | 1.83×10^3 | 1.96×10^3 | 2.33×10^3 | 1.86×10^3
F17 | avg | 1.94×10^3 | 2.13×10^3 | 2.73×10^3 | 2.61×10^3 | 2.71×10^3 | 2.06×10^3 | 2.45×10^3 | 2.92×10^3 | 2.23×10^3
F17 | std | 1.38×10^2 | 1.74×10^2 | 3.38×10^2 | 2.69×10^2 | 2.64×10^2 | 1.65×10^2 | 2.74×10^2 | 3.56×10^2 | 2.68×10^2
F18 | min | 3.29×10^4 | 8.69×10^4 | 3.09×10^5 | 3.99×10^5 | 1.18×10^5 | 4.36×10^5 | 5.33×10^4 | 2.60×10^5 | 4.31×10^4
F18 | avg | 8.42×10^4 | 2.16×10^5 | 1.95×10^6 | 1.78×10^6 | 1.49×10^7 | 2.06×10^6 | 6.91×10^5 | 7.49×10^6 | 1.30×10^6
F18 | std | 4.82×10^4 | 2.01×10^5 | 1.83×10^6 | 1.12×10^6 | 1.63×10^7 | 2.11×10^6 | 6.61×10^5 | 4.85×10^6 | 1.49×10^6
F19 | min | 2.00×10^3 | 2.11×10^3 | 1.63×10^5 | 3.55×10^3 | 2.37×10^6 | 1.49×10^4 | 2.31×10^3 | 1.06×10^6 | 1.49×10^4
F19 | avg | 3.52×10^3 | 1.23×10^4 | 1.92×10^6 | 3.58×10^6 | 1.78×10^7 | 1.49×10^6 | 7.48×10^3 | 4.09×10^6 | 1.39×10^6
F19 | std | 1.49×10^3 | 1.10×10^4 | 1.98×10^6 | 5.49×10^6 | 1.77×10^7 | 2.99×10^6 | 4.96×10^3 | 1.99×10^6 | 1.23×10^6
F20 | min | 2.19×10^3 | 2.45×10^3 | 2.45×10^3 | 2.38×10^3 | 2.55×10^3 | 2.20×10^3 | 2.18×10^3 | 2.68×10^3 | 2.28×10^3
F20 | avg | 2.39×10^3 | 2.61×10^3 | 2.86×10^3 | 2.78×10^3 | 2.83×10^3 | 2.50×10^3 | 2.74×10^3 | 3.06×10^3 | 2.53×10^3
F20 | std | 1.60×10^2 | 1.17×10^2 | 2.07×10^2 | 2.07×10^2 | 2.46×10^2 | 2.33×10^2 | 2.82×10^2 | 1.92×10^2 | 1.42×10^2
F21 | min | 2.42×10^3 | 2.46×10^3 | 2.50×10^3 | 2.50×10^3 | 2.55×10^3 | 2.40×10^3 | 2.47×10^3 | 2.61×10^3 | 2.40×10^3
F21 | avg | 2.45×10^3 | 2.49×10^3 | 2.60×10^3 | 2.58×10^3 | 2.65×10^3 | 2.43×10^3 | 2.51×10^3 | 2.65×10^3 | 2.46×10^3
F21 | std | 2.63×10^1 | 1.94×10^1 | 4.78×10^1 | 5.70×10^1 | 6.54×10^1 | 1.67×10^1 | 2.92×10^1 | 2.95×10^1 | 4.13×10^1
F22 | min | 2.29×10^3 | 2.29×10^3 | 6.24×10^3 | 2.37×10^3 | 3.92×10^3 | 2.44×10^3 | 2.30×10^3 | 6.78×10^3 | 2.34×10^3
F22 | avg | 2.30×10^3 | 3.99×10^3 | 7.48×10^3 | 3.56×10^3 | 7.70×10^3 | 4.75×10^3 | 6.76×10^3 | 8.47×10^3 | 5.38×10^3
F22 | std | 2.68×10^0 | 2.69×10^3 | 6.50×10^2 | 1.88×10^3 | 1.92×10^3 | 2.92×10^3 | 1.77×10^3 | 5.98×10^2 | 1.94×10^3
F23 | min | 2.75×10^3 | 2.78×10^3 | 3.12×10^3 | 2.85×10^3 | 3.03×10^3 | 2.72×10^3 | 2.85×10^3 | 3.15×10^3 | 2.75×10^3
F23 | avg | 2.79×10^3 | 2.85×10^3 | 3.29×10^3 | 3.00×10^3 | 3.12×10^3 | 2.82×10^3 | 2.94×10^3 | 3.38×10^3 | 2.85×10^3
F23 | std | 2.82×10^1 | 4.02×10^1 | 1.18×10^2 | 1.11×10^2 | 9.15×10^1 | 6.88×10^1 | 7.33×10^1 | 1.42×10^2 | 5.79×10^1
F24 | min | 2.90×10^3 | 2.97×10^3 | 3.28×10^3 | 2.99×10^3 | 3.10×10^3 | 2.87×10^3 | 2.96×10^3 | 3.29×10^3 | 2.92×10^3
F24 | avg | 2.93×10^3 | 3.03×10^3 | 3.54×10^3 | 3.18×10^3 | 3.25×10^3 | 3.01×10^3 | 3.08×10^3 | 3.56×10^3 | 3.00×10^3
F24 | std | 3.76×10^1 | 3.15×10^1 | 1.84×10^2 | 1.11×10^2 | 1.12×10^2 | 8.73×10^1 | 1.06×10^2 | 1.58×10^2 | 4.82×10^1
F25 | min | 2.88×10^3 | 2.90×10^3 | 2.99×10^3 | 2.92×10^3 | 3.13×10^3 | 2.94×10^3 | 2.88×10^3 | 3.18×10^3 | 2.93×10^3
F25 | avg | 2.89×10^3 | 2.92×10^3 | 3.03×10^3 | 2.99×10^3 | 3.22×10^3 | 3.03×10^3 | 2.90×10^3 | 3.31×10^3 | 2.97×10^3
F25 | std | 7.08×10^0 | 1.18×10^1 | 2.50×10^1 | 6.92×10^1 | 8.14×10^1 | 7.30×10^1 | 2.30×10^1 | 6.23×10^1 | 2.72×10^1
F26 | min | 4.07×10^3 | 5.14×10^3 | 6.59×10^3 | 5.98×10^3 | 5.78×10^3 | 4.71×10^3 | 2.90×10^3 | 7.33×10^3 | 3.35×10^3
F26 | avg | 5.08×10^3 | 5.52×10^3 | 8.72×10^3 | 7.09×10^3 | 8.49×10^3 | 5.03×10^3 | 6.02×10^3 | 8.57×10^3 | 5.34×10^3
F26 | std | 6.96×10^2 | 2.16×10^2 | 9.98×10^2 | 9.18×10^2 | 1.09×10^3 | 2.49×10^2 | 1.70×10^3 | 6.92×10^2 | 1.01×10^3
F27 | min | 3.19×10^3 | 3.19×10^3 | 3.44×10^3 | 3.28×10^3 | 3.33×10^3 | 3.23×10^3 | 3.23×10^3 | 3.51×10^3 | 3.24×10^3
F27 | avg | 3.23×10^3 | 3.24×10^3 | 3.63×10^3 | 3.35×10^3 | 3.47×10^3 | 3.28×10^3 | 3.25×10^3 | 4.00×10^3 | 3.33×10^3
F27 | std | 8.56×10^0 | 1.85×10^1 | 1.98×10^2 | 6.09×10^1 | 1.14×10^2 | 3.48×10^1 | 2.10×10^1 | 3.29×10^2 | 6.33×10^1
F28 | min | 3.21×10^3 | 3.24×10^3 | 3.39×10^3 | 3.33×10^3 | 3.65×10^3 | 3.35×10^3 | 3.21×10^3 | 3.89×10^3 | 3.26×10^3
F28 | avg | 3.25×10^3 | 3.27×10^3 | 3.52×10^3 | 3.68×10^3 | 3.91×10^3 | 3.43×10^3 | 3.23×10^3 | 4.33×10^3 | 3.35×10^3
F28 | std | 2.04×10^1 | 2.83×10^1 | 7.59×10^1 | 7.88×10^2 | 2.10×10^2 | 9.14×10^1 | 2.51×10^1 | 2.42×10^2 | 4.36×10^1
F29 | min | 3.70×10^3 | 3.87×10^3 | 4.14×10^3 | 4.31×10^3 | 4.97×10^3 | 3.63×10^3 | 3.76×10^3 | 5.11×10^3 | 4.16×10^3
F29 | avg | 3.83×10^3 | 4.22×10^3 | 5.17×10^3 | 4.69×10^3 | 5.80×10^3 | 3.90×10^3 | 4.22×10^3 | 5.99×10^3 | 4.63×10^3
F29 | std | 1.06×10^2 | 1.86×10^2 | 6.41×10^2 | 2.76×10^2 | 5.51×10^2 | 2.45×10^2 | 2.80×10^2 | 5.70×10^2 | 3.21×10^2
F30 | min | 6.43×10^3 | 2.35×10^4 | 1.98×10^6 | 4.51×10^4 | 1.38×10^7 | 1.82×10^6 | 1.18×10^4 | 2.84×10^7 | 8.13×10^5
F30 | avg | 1.94×10^4 | 6.90×10^4 | 1.77×10^7 | 7.22×10^6 | 8.21×10^7 | 1.18×10^7 | 2.62×10^4 | 1.52×10^8 | 1.08×10^7
F30 | std | 1.09×10^4 | 5.14×10^4 | 1.19×10^7 | 8.38×10^6 | 5.78×10^7 | 1.03×10^7 | 1.97×10^4 | 9.58×10^7 | 8.55×10^6
Bold is the best result of all the algorithms.
Mathematics 2025,13, 405 17 of 30
4.3.2. Analysis of CEC2017 Convergence Curve
Figure 7 presents the convergence curves of the MKOA, KOA, HHO, DBO, WOA, GWO, SSA, GTO, and VPPSO for the CEC2017 test set in a 30-dimensional setting. The specific analysis is given below:
[Figure 7 shows per-function convergence curves (average fitness versus iteration, 500 iterations) for F1 and F3–F30, comparing MKOA, KOA, HHO, DBO, WOA, GWO, SSA, GTO, and VPPSO.]
Figure 7. Convergence curve of the nine algorithms in CEC2017.
• For the unimodal function F1, the MKOA is similar to the SSA with respect to convergence accuracy and rate. For the unimodal function F3, the MKOA shows a convergence speed comparable to the other algorithms; however, over time, it achieves a superior solution.
• For the simple multimodal problems F4–F7 and F9, the MKOA's superior exploration capabilities enable it to converge more quickly and reach better positions. For the multimodal problem F8, although the MKOA's performance is not as good as that of the GWO, it significantly outperforms the other algorithms.
• For the hybrid function F10, although the optimal position was not obtained, the MKOA is still a significant improvement over the original KOA. For the hybrid functions F11 and F17, there is no notable difference in convergence rate among the algorithms. For the remaining hybrid functions (F12–F16, F18–F19), the MKOA performs significantly better than the other algorithms.
• For the composite functions F25 and F27–F29, the results of all the algorithms are roughly the same, showing only slight differences. Regarding F20–F24, F26, and F30, the MKOA has obvious advantages and stays ahead of the competing algorithms. This shows that the MKOA reaches the optimal solution faster, improving problem-solving efficiency and demonstrating superior robustness.
4.4. CEC2019 Test Suite Experimental Results
The CEC2019 test functions were also employed to further evaluate the MKOA's performance, with detailed information available in Ref. [54]. All parameter settings for the MKOA are consistent with those mentioned earlier.
4.4.1. CEC2019 Statistical Results Analysis
Table 3 shows that the MKOA is superior to the comparison algorithms. Across the 10 test functions, it obtained the best solution on 8 of them, accounting for 80% (F2–F5, F7–F10). It bears mentioning that although the MKOA attains the same optimal solution as the original KOA on F2 and F3, its standard deviation is better, indicating that the MKOA is more stable. For the remaining functions, F1 and F6, the MKOA is also competitive with the other algorithms. All in all, these results show the effective optimization capability of the MKOA and its ability to closely approach each function's theoretical ideal value.
Table 3. CEC2019 test results.

Fn | Metric | MKOA | KOA | HHO | DBO | WOA | GWO | SSA | GTO | VPPSO
F1 | min | 4.63×10^4 | 2.51×10^7 | 4.20×10^4 | 4.03×10^4 | 1.10×10^6 | 1.02×10^6 | 3.79×10^4 | 6.45×10^4 | 4.13×10^4
F1 | avg | 9.14×10^4 | 2.17×10^8 | 5.52×10^4 | 1.15×10^9 | 4.91×10^10 | 3.17×10^8 | 4.12×10^4 | 9.07×10^4 | 4.93×10^4
F1 | std | 6.40×10^4 | 3.57×10^8 | 7.52×10^3 | 2.44×10^9 | 5.11×10^10 | 3.51×10^8 | 2.07×10^3 | 2.04×10^4 | 1.18×10^4
F2 | min | 1.73×10^1 | 1.73×10^1 | 1.74×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.74×10^1 | 1.73×10^1
F2 | avg | 1.73×10^1 | 1.73×10^1 | 1.74×10^1 | 1.73×10^1 | 1.74×10^1 | 1.74×10^1 | 1.73×10^1 | 1.76×10^1 | 1.74×10^1
F2 | std | 4.27×10^−15 | 1.71×10^−9 | 1.22×10^−2 | 4.74×10^−15 | 1.33×10^−2 | 6.82×10^−2 | 4.59×10^−15 | 8.80×10^−2 | 8.51×10^−2
F3 | min | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1
F3 | avg | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1
F3 | std | 3.42×10^−12 | 8.87×10^−10 | 8.24×10^−6 | 6.49×10^−6 | 4.98×10^−7 | 1.29×10^−5 | 5.02×10^−7 | 6.40×10^−6 | 9.21×10^−13
F4 | min | 7.66×10^0 | 1.14×10^1 | 1.06×10^2 | 2.29×10^1 | 1.40×10^2 | 3.12×10^1 | 2.89×10^1 | 6.61×10^2 | 7.96×10^0
F4 | avg | 2.12×10^1 | 2.81×10^1 | 2.28×10^2 | 1.66×10^2 | 3.01×10^2 | 5.29×10^2 | 5.99×10^1 | 1.77×10^3 | 4.09×10^1
F4 | std | 7.19×10^0 | 1.15×10^1 | 9.30×10^1 | 1.70×10^2 | 1.57×10^2 | 9.98×10^2 | 2.20×10^1 | 8.01×10^2 | 1.95×10^1
F5 | min | 1.04×10^0 | 1.03×10^0 | 1.74×10^0 | 1.08×10^0 | 1.58×10^0 | 1.08×10^0 | 1.04×10^0 | 1.46×10^0 | 1.11×10^0
F5 | avg | 1.12×10^0 | 1.22×10^0 | 2.51×10^0 | 1.28×10^0 | 2.17×10^0 | 1.43×10^0 | 1.15×10^0 | 2.07×10^0 | 1.24×10^0
F5 | std | 7.98×10^−2 | 1.40×10^−1 | 7.89×10^−1 | 2.32×10^−1 | 6.63×10^−1 | 3.02×10^−1 | 1.08×10^−1 | 5.18×10^−1 | 1.08×10^−1
F6 | min | 7.58×10^0 | 8.79×10^0 | 8.24×10^0 | 8.38×10^0 | 8.59×10^0 | 9.98×10^0 | 4.72×10^0 | 8.71×10^0 | 2.14×10^0
F6 | avg | 9.12×10^0 | 9.78×10^0 | 9.60×10^0 | 1.08×10^1 | 9.92×10^0 | 1.09×10^1 | 6.46×10^0 | 1.02×10^1 | 5.62×10^0
F6 | std | 6.99×10^−1 | 5.90×10^−1 | 9.95×10^−1 | 1.12×10^0 | 5.71×10^−1 | 5.15×10^−1 | 1.15×10^0 | 8.28×10^−1 | 1.83×10^0
F7 | min | 8.69×10^1 | 1.41×10^2 | 1.52×10^2 | 1.60×10^2 | 3.51×10^2 | 4.23×10^1 | 5.81×10^1 | 2.47×10^2 | 1.17×10^2
F7 | avg | 3.20×10^2 | 4.04×10^2 | 3.70×10^2 | 4.32×10^2 | 7.52×10^2 | 3.59×10^2 | 4.42×10^2 | 5.70×10^2 | 3.23×10^2
F7 | std | 1.30×10^2 | 1.49×10^2 | 1.49×10^2 | 1.96×10^2 | 1.95×10^2 | 2.85×10^2 | 2.55×10^2 | 3.27×10^2 | 2.32×10^2
F8 | min | 4.09×10^0 | 4.49×10^0 | 4.68×10^0 | 3.65×10^0 | 4.35×10^0 | 4.11×10^0 | 3.62×10^0 | 5.45×10^0 | 3.62×10^0
F8 | avg | 4.58×10^0 | 5.55×10^0 | 5.73×10^0 | 5.60×10^0 | 5.39×10^0 | 5.52×10^0 | 5.32×10^0 | 5.97×10^0 | 5.09×10^0
F8 | std | 5.69×10^−1 | 6.28×10^−1 | 6.42×10^−1 | 1.07×10^0 | 6.67×10^−1 | 1.29×10^0 | 9.59×10^−1 | 3.35×10^−1 | 7.77×10^−1
F9 | min | 2.36×10^0 | 2.38×10^0 | 2.97×10^0 | 2.43×10^0 | 3.22×10^0 | 3.67×10^0 | 2.41×10^0 | 3.79×10^0 | 2.74×10^0
F9 | avg | 2.38×10^0 | 2.45×10^0 | 3.09×10^0 | 2.52×10^0 | 4.51×10^0 | 4.57×10^0 | 2.42×10^0 | 1.77×10^1 | 3.45×10^0
F9 | std | 2.54×10^−2 | 4.62×10^−2 | 1.11×10^−1 | 9.76×10^−2 | 1.12×10^0 | 8.38×10^−1 | 6.03×10^−2 | 2.74×10^1 | 4.86×10^−1
F10 | min | 1.28×10^−2 | 1.99×10^1 | 2.01×10^1 | 1.97×10^1 | 2.02×10^1 | 1.99×10^1 | 1.95×10^1 | 2.02×10^1 | 2.00×10^1
F10 | avg | 1.83×10^1 | 2.00×10^1 | 2.02×10^1 | 2.05×10^1 | 2.04×10^1 | 2.05×10^1 | 2.01×10^1 | 2.03×10^1 | 2.01×10^1
F10 | std | 6.43×10^0 | 7.87×10^−2 | 1.06×10^−1 | 1.06×10^−1 | 7.93×10^−2 | 7.41×10^−2 | 1.79×10^−1 | 1.05×10^−1 | 1.65×10^−1
Bold is the best result of all the algorithms.
4.4.2. Analysis of CEC2019 Convergence Curve
Figure 8 presents the convergence curves of the MKOA, KOA, HHO, DBO, WOA, GWO, SSA, GTO, and VPPSO on CEC2019 under the default dimension settings, which show the stability and convergence of the revised approach. The results indicate that the MKOA performs excellently on most functions. A detailed analysis is given below:
[Figure 8 shows per-function convergence curves (average fitness versus iteration, 500 iterations) for F1–F10 of CEC2019, comparing MKOA, KOA, HHO, DBO, WOA, GWO, SSA, GTO, and VPPSO.]
Figure 8. Convergence curve of the nine algorithms in CEC2019.
Regarding F4–F5, F7–F8, and F10, the MKOA achieves the fastest convergence rate of the nine algorithms, indicating outstanding performance. On F2, F3, and F9, most algorithms show similar convergence speed and precision, with minimal differences. For functions F1 and F6, the MKOA demonstrates a certain level of competitiveness; although it does not converge to the optimal position, it is clearly a significant improvement over the original KOA.
4.5. Wilcoxon Rank-Sum Test
We used the Wilcoxon rank-sum test [55] to test whether the MKOA outperformed the other eight algorithms. The p-value represents the significance level; when it falls below the 5% threshold, it denotes a significant difference.
Table 4 shows the p-values for CEC2017 in dimension 30, while Table 5 displays the p-values for CEC2019. Bold font indicates that the algorithm does not show statistically significant competitiveness. The final row summarizes the win/loss counts of the p-value comparisons between the MKOA and each competitor.
By observing the data in Tables 4 and 5, we can see that the number of bold annotations is relatively low. Thus, the MKOA demonstrates significantly different optimization performance compared with the other eight algorithms on CEC2017 and CEC2019. The results show that, in most cases, the MKOA's performance is not inferior to that of the comparison algorithms.
Table 4. p-values of CEC2017.

Fn | KOA | HHO | DBO | WOA | GWO | SSA | GTO | VPPSO
F1 | 3.64×10^−3 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 6.56×10^−3 | 6.80×10^−8 | 1.60×10^−5
F3 | 5.87×10^−6 | 2.56×10^−3 | 6.75×10^−8 | 6.75×10^−8 | 5.19×10^−5 | 1.64×10^−1 | 7.90×10^−8 | 1.44×10^−2
F4 | 4.39×10^−2 | 6.75×10^−8 | 2.22×10^−7 | 6.75×10^−8 | 9.21×10^−8 | 1.56×10^−1 | 6.80×10^−8 | 4.17×10^−5
F5 | 2.82×10^−4 | 1.20×10^−6 | 6.91×10^−4 | 6.75×10^−8 | 1.77×10^−6 | 2.23×10^−2 | 6.80×10^−8 | 1.33×10^−1
F6 | 3.34×10^−3 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 4.54×10^−7 | 6.75×10^−8 | 6.80×10^−8 | 6.80×10^−8
F7 | 7.11×10^−3 | 6.75×10^−8 | 8.35×10^−3 | 6.75×10^−8 | 2.23×10^−2 | 1.18×10^−6 | 6.80×10^−8 | 1.79×10^−4
F8 | 3.81×10^−1 | 2.89×10^−1 | 1.55×10^−2 | 1.12×10^−6 | 7.90×10^−8 | 6.01×10^−2 | 1.67×10^−2 | 1.81×10^−5
F9 | 1.38×10^−2 | 6.75×10^−8 | 7.90×10^−8 | 6.75×10^−8 | 5.09×10^−4 | 6.75×10^−8 | 6.80×10^−8 | 3.42×10^−7
F10 | 7.58×10^−2 | 4.62×10^−7 | 1.14×10^−2 | 1.47×10^−2 | 3.11×10^−4 | 6.75×10^−8 | 2.22×10^−4 | 4.54×10^−6
F11 | 4.71×10^−5 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 3.42×10^−7 | 2.34×10^−3 | 6.80×10^−8 | 1.43×10^−7
F12 | 1.35×10^−3 | 6.75×10^−8 | 9.17×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 5.90×10^−5 | 6.80×10^−8 | 6.80×10^−8
F13 | 2.89×10^−2 | 6.75×10^−8 | 1.20×10^−6 | 6.75×10^−8 | 6.75×10^−8 | 6.56×10^−3 | 6.80×10^−8 | 1.92×10^−7
F14 | 2.94×10^−2 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 6.75×10^−8 | 1.18×10^−7 | 6.80×10^−8 | 6.80×10^−8
F15 | 3.15×10^−2 | 6.75×10^−8 | 3.38×10^−7 | 6.76×10^−8 | 7.90×10^−8 | 1.67×10^−2 | 6.80×10^−8 | 7.90×10^−8
F16 | 4.08×10^−2 | 6.72×10^−6 | 1.12×10^−3 | 1.23×10^−7 | 1.98×10^−4 | 6.01×10^−2 | 6.80×10^−8 | 4.41×10^−1
F17 | 4.70×10^−3 | 2.36×10^−6 | 2.96×10^−7 | 1.06×10^−7 | 2.29×10^−1 | 4.60×10^−4 | 1.23×10^−7 | 3.60×10^−2
F18 | 1.48×10^−3 | 1.66×10^−7 | 6.75×10^−8 | 1.18×10^−7 | 7.89×10^−7 | 1.81×10^−5 | 6.80×10^−8 | 1.20×10^−6
F19 | 4.70×10^−3 | 6.75×10^−8 | 1.98×10^−6 | 6.75×10^−8 | 7.90×10^−8 | 4.99×10^−2 | 6.80×10^−8 | 6.80×10^−8
F20 | 4.11×10^−2 | 9.75×10^−6 | 1.51×10^−3 | 1.92×10^−7 | 1.58×10^−1 | 1.23×10^−3 | 3.99×10^−6 | 1.81×10^−1
F21 | 4.11×10^−2 | 3.88×10^−7 | 1.57×10^−5 | 6.75×10^−8 | 1.79×10^−4 | 2.14×10^−3 | 6.80×10^−8 | 1.90×10^−1
F22 | 7.71×10^−3 | 1.12×10^−6 | 1.20×10^−6 | 5.23×10^−7 | 1.12×10^−6 | 5.07×10^−3 | 6.80×10^−8 | 6.80×10^−8
F23 | 2.56×10^−2 | 6.75×10^−8 | 3.88×10^−7 | 6.75×10^−8 | 7.64×10^−2 | 1.41×10^−5 | 6.80×10^−8 | 2.73×10^−1
F24 | 4.11×10^−2 | 6.75×10^−8 | 6.88×10^−7 | 1.87×10^−7 | 8.36×10^−4 | 1.40×10^−1 | 6.80×10^−8 | 7.64×10^−2
F25 | 3.60×10^−2 | 6.75×10^−8 | 3.42×10^−7 | 6.75×10^−8 | 1.12×10^−7 | 2.98×10^−1 | 6.80×10^−8 | 1.23×10^−7
F26 | 8.36×10^−4 | 3.02×10^−7 | 5.22×10^−6 | 1.38×10^−7 | 2.32×10^−1 | 2.00×10^−4 | 6.80×10^−8 | 5.56×10^−3
F27 | 2.00×10^−4 | 3.02×10^−7 | 4.41×10^−1 | 1.81×10^−5 | 1.58×10^−1 | 1.08×10^−1 | 6.80×10^−8 | 1.43×10^−7
F28 | 3.07×10^−6 | 1.19×10^−7 | 3.34×10^−3 | 6.75×10^−8 | 5.23×10^−7 | 6.01×10^−7 | 6.80×10^−8 | 1.66×10^−7
F29 | 1.33×10^−2 | 1.38×10^−7 | 1.22×10^−4 | 1.11×10^−7 | 2.13×10^−1 | 6.87×10^−4 | 6.80×10^−8 | 3.42×10^−7
F30 | 5.25×10^−5 | 6.75×10^−8 | 1.12×10^−6 | 6.75×10^−8 | 6.75×10^−8 | 1.64×10^−1 | 6.80×10^−8 | 6.80×10^−8
(W|L) | (27|2) | (28|1) | (28|1) | (29|0) | (23|6) | (21|8) | (29|0) | (23|6)
The bold font indicates that the algorithm does not show significant competitiveness.
Table 5. p-values of CEC2019.

Fn | KOA | HHO | DBO | WOA | GWO | SSA | GTO | VPPSO
F1 | 7.90×10^−8 | 3.75×10^−4 | 3.65×10^−1 | 9.17×10^−8 | 3.50×10^−6 | 1.44×10^−4 | 6.80×10^−8 | 6.80×10^−8
F2 | 3.38×10^−8 | 3.38×10^−8 | 1.36×10^−4 | 3.38×10^−8 | 3.43×10^−8 | 1.59×10^−1 | 6.69×10^−8 | 6.69×10^−8
F3 | 6.75×10^−8 | 6.75×10^−8 | 4.36×10^−1 | 6.75×10^−8 | 6.75×10^−8 | 3.49×10^−5 | 6.80×10^−8 | 1.94×10^−1
F4 | 3.07×10^−2 | 6.75×10^−8 | 1.43×10^−7 | 6.75×10^−8 | 5.23×10^−7 | 6.22×10^−4 | 6.80×10^−8 | 1.41×10^−5
F5 | 1.28×10^−2 | 1.19×10^−7 | 3.42×10^−4 | 6.80×10^−8 | 2.22×10^−4 | 1.02×10^−1 | 6.80×10^−8 | 2.75×10^−2
F6 | 2.22×10^−1 | 4.38×10^−1 | 6.22×10^−4 | 1.52×10^−1 | 1.20×10^−6 | 3.42×10^−7 | 6.04×10^−3 | 1.58×10^−6
F7 | 2.61×10^−2 | 1.38×10^−1 | 2.07×10^−2 | 2.56×10^−3 | 1.53×10^−1 | 9.05×10^−3 | 2.73×10^−1 | 3.60×10^−2
F8 | 4.57×10^−4 | 1.18×10^−3 | 5.12×10^−3 | 1.01×10^−3 | 3.37×10^−1 | 3.48×10^−1 | 2.56×10^−3 | 7.71×10^−3
F9 | 2.22×10^−7 | 6.75×10^−8 | 1.81×10^−5 | 6.75×10^−8 | 6.75×10^−8 | 5.56×10^−3 | 6.80×10^−8 | 6.80×10^−8
F10 | 1.23×10^−2 | 2.62×10^−2 | 3.34×10^−3 | 5.65×10^−2 | 3.81×10^−4 | 1.78×10^−3 | 3.15×10^−2 | 1.79×10^−4
(W|L) | (9|1) | (8|2) | (8|2) | (8|2) | (8|2) | (7|3) | (9|1) | (9|1)
The bold font indicates that the algorithm does not show significant competitiveness.
4.6. Ablation Analysis
To affirm the effectiveness of the introduced strategies, this study adopted ablation experiments for validation. Based on the basic KOA, the variant using only the GPS strategy is designated KOA1, the variant using only the DOBL strategy is KOA2, the variant using only the NCM strategy is KOA3, and the variant using only the new exploration strategy is KOA4. All parameter settings are consistent with those mentioned earlier; the only difference is that the number of runs was increased to 100. We conducted the ablation test on CEC2019, and Table 6 presents the results.
The data in Table 6 indicate that the MKOA obtained the greatest number of best average values, demonstrating its excellent performance. In addition, the data for KOA1, KOA2, KOA3, and KOA4 show that each strategy yields a clear improvement over the basic KOA. Therefore, we can confirm that the proposed improvement strategies are essential for enhancing the efficiency of the MKOA.
Table 6. Ablation experiment based on CEC2019.

Fn | Metric | MKOA | KOA | KOA1 | KOA2 | KOA3 | KOA4
F1 | min | 4.47×10^6 | 1.09×10^10 | 1.49×10^9 | 6.61×10^7 | 5.98×10^9 | 1.25×10^9
F1 | avg | 7.06×10^8 | 2.77×10^10 | 2.49×10^10 | 7.39×10^8 | 2.44×10^10 | 9.51×10^9
F1 | std | 8.71×10^8 | 1.48×10^10 | 2.39×10^10 | 7.28×10^8 | 1.39×10^10 | 6.59×10^9
F2 | min | 1.72×10^1 | 1.80×10^1 | 1.86×10^1 | 1.72×10^1 | 1.79×10^1 | 1.75×10^1
F2 | avg | 1.73×10^1 | 2.15×10^1 | 1.96×10^1 | 1.75×10^1 | 2.11×10^1 | 1.77×10^1
F2 | std | 1.83×10^−4 | 2.79×10^0 | 6.59×10^−1 | 4.50×10^−4 | 2.85×10^0 | 3.04×10^−1
F3 | min | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1
F3 | avg | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1
F3 | std | 9.79×10^−7 | 1.88×10^−5 | 1.09×10^−5 | 6.06×10^−6 | 9.86×10^−6 | 1.69×10^−6
F4 | min | 4.29×10^1 | 6.59×10^1 | 6.31×10^1 | 6.93×10^1 | 7.06×10^1 | 5.16×10^1
F4 | avg | 6.39×10^1 | 1.09×10^2 | 9.95×10^1 | 9.84×10^1 | 1.02×10^2 | 7.32×10^1
F4 | std | 1.21×10^1 | 2.47×10^1 | 2.26×10^1 | 1.79×10^1 | 2.85×10^1 | 1.29×10^1
F5 | min | 1.26×10^0 | 1.68×10^0 | 1.59×10^0 | 1.57×10^0 | 1.65×10^0 | 1.38×10^0
F5 | avg | 1.61×10^0 | 1.82×10^0 | 1.79×10^0 | 1.78×10^0 | 1.81×10^0 | 1.69×10^0
F5 | std | 1.68×10^−1 | 6.87×10^−2 | 1.08×10^−1 | 9.39×10^−2 | 9.87×10^−2 | 1.30×10^−1
F6 | min | 9.06×10^0 | 8.73×10^0 | 9.02×10^0 | 8.30×10^0 | 8.50×10^0 | 9.78×10^0
F6 | avg | 1.06×10^1 | 1.14×10^1 | 1.09×10^1 | 1.09×10^1 | 1.09×10^1 | 1.10×10^1
F6 | std | 1.10×10^0 | 9.36×10^−1 | 7.90×10^−1 | 1.14×10^0 | 9.90×10^−1 | 7.45×10^−1
F7 | min | 2.55×10^2 | 6.17×10^2 | 5.64×10^2 | 5.87×10^2 | 5.89×10^2 | 4.47×10^2
F7 | avg | 6.42×10^2 | 8.81×10^2 | 8.05×10^2 | 8.19×10^2 | 8.31×10^2 | 8.17×10^2
F7 | std | 1.66×10^2 | 1.69×10^2 | 1.88×10^2 | 1.28×10^2 | 1.89×10^2 | 2.16×10^2
F8 | min | 5.15×10^0 | 6.10×10^0 | 5.98×10^0 | 4.99×10^0 | 5.47×10^0 | 5.15×10^0
F8 | avg | 6.22×10^0 | 6.81×10^0 | 6.64×10^0 | 6.44×10^0 | 6.50×10^0 | 6.29×10^0
F8 | std | 5.74×10^−1 | 3.21×10^−1 | 3.46×10^−1 | 5.87×10^−1 | 4.01×10^−1 | 4.69×10^−1
F9 | min | 2.61×10^0 | 2.79×10^0 | 2.96×10^0 | 3.03×10^0 | 3.19×10^0 | 2.67×10^0
F9 | avg | 3.11×10^0 | 4.11×10^0 | 3.76×10^0 | 3.70×10^0 | 3.88×10^0 | 3.33×10^0
F9 | std | 2.99×10^−1 | 5.89×10^−1 | 5.84×10^−1 | 4.61×10^−1 | 4.04×10^−1 | 4.59×10^−1
F10 | min | 5.76×10^0 | 2.03×10^1 | 2.02×10^1 | 2.03×10^1 | 2.03×10^1 | 2.04×10^1
F10 | avg | 2.01×10^1 | 2.06×10^1 | 2.05×10^1 | 2.05×10^1 | 2.05×10^1 | 2.05×10^1
F10 | std | 3.29×10^0 | 7.19×10^−2 | 1.43×10^−1 | 9.58×10^−2 | 1.12×10^−1 | 9.36×10^−2
Bold is the best result of all the algorithms.
5. Engineering Applications
This section evaluates the MKOA's capability to handle real-world optimization challenges using three well-known engineering design problems. All three are single-objective problems, which require finding the best value of the objective function while satisfying a set of intricate constraints. (When several objective functions must be optimized simultaneously, the problem becomes a multi-objective optimization problem.) In addition, engineering optimization problems are constrained problems, which require effective constraint handling. Common constraint-handling techniques include the penalty function approach, the feasibility rule method, and multi-objective optimization strategies; we mainly adopt the penalty function approach. All parameter settings are consistent with those mentioned earlier.
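A minimal sketch of the penalty function approach mentioned above: infeasibility is folded into the objective so that an unconstrained optimizer can be applied directly. The function names and the penalty weight `rho` are our illustrative choices, not the paper's exact implementation.

```python
def penalized(obj, constraints, x, rho=1e6):
    # Static penalty: each g in `constraints` is satisfied when g(x) <= 0;
    # violations are squared, summed, and scaled by rho.
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return obj(x) + rho * violation

# Toy problem: minimize x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0.
obj = lambda x: x * x
g = [lambda x: 1.0 - x]
feasible_value = penalized(obj, g, 1.0)    # no penalty at the boundary
infeasible_value = penalized(obj, g, 0.0)  # heavily penalized
print(feasible_value, infeasible_value)
```

Because violated constraints dominate the penalized objective, the optimizer is steered back into the feasible region.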
5.1. Three-Bar Truss Design Issue
The three-bar truss is a classic engineering practice problem and has been an important case in many research works [56]. Figure 9 shows the schematic of the truss structure and its associated force distribution. The goal is to decrease the structural weight while the total load remains constant. To reach this objective, three constraints must be considered: stress, buckling, and deflection constraints. Subject to these constraints, the lightest weight is achieved by adjusting the two cross-sectional areas A1 and A2. This optimization design reduces engineering costs while ensuring safety. The relevant mathematical expressions are shown below.
[Figure 9 depicts the three-bar truss with load P, bar lengths l, and cross-sectional areas A1, A2, A3, where A1 = A3.]
Figure 9. The schema of the three-bar truss.
Consider the following variables:
X = [x1, x2] = [A1, A2] (39)
Minimize
f(x) = (2√2 x1 + x2) × 100 (40)
This is subject to
g1(x) = ((√2 x1 + x2) / (√2 x1² + 2 x1 x2)) Q − σ ≤ 0
g2(x) = (x2 / (√2 x1² + 2 x1 x2)) Q − σ ≤ 0
g3(x) = (1 / (√2 x2 + x1)) Q − σ ≤ 0 (41)
where Q = σ = 2 kN/cm² and x1, x2 ∈ [0, 1].
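The objective and constraints of Equations (40) and (41) can be evaluated directly; the sketch below checks the reported MKOA optimum. The function names are ours, and the small tolerance absorbs the rounding of the published variables.

```python
import math

Q = SIGMA = 2.0  # kN/cm^2, from the problem statement

def truss_weight(x1, x2):
    # Equation (40): f(x) = (2*sqrt(2)*x1 + x2) * 100
    return (2.0 * math.sqrt(2.0) * x1 + x2) * 100.0

def truss_constraints(x1, x2):
    # Equation (41): all three values must be <= 0 for feasibility.
    d = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    return [
        (math.sqrt(2.0) * x1 + x2) / d * Q - SIGMA,
        x2 / d * Q - SIGMA,
        1.0 / (math.sqrt(2.0) * x2 + x1) * Q - SIGMA,
    ]

x1, x2 = 0.788675, 0.408248          # MKOA optimum reported in Table 7
w = truss_weight(x1, x2)
feasible = all(g <= 1e-6 for g in truss_constraints(x1, x2))
print(round(w, 4), feasible)
```

The first (stress) constraint is essentially active at this point, which is why the weight cannot be reduced further.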
Table 7 presents the comparative results, including the optimal weight and the corresponding optimal variables. As shown in Table 7 and Figure 10, at x = (0.788675, 0.408248), the MKOA reaches the optimal value f(x) = 263.89584338, and its average over 20 runs is also ahead of the other algorithms. Clearly, the MKOA achieves the lowest manufacturing cost on this problem.
Table 7. Three-bar truss design issue.

Algorithm | x1 | x2 | Min Weight | Mean Weight | Std
MKOA | 0.788675 | 0.408248 | 263.89584338 | 263.89584338 | 5.19×10^−12
KOA | 0.788674 | 0.408250 | 263.89584338 | 263.89584339 | 3.18×10^−9
HHO | 0.789306 | 0.406467 | 263.89613516 | 264.33071062 | 5.34×10^−1
DBO | 0.788417 | 0.408978 | 263.89589283 | 263.90083157 | 7.17×10^−3
WOA | 0.785100 | 0.418456 | 263.90535711 | 265.50895854 | 2.24×10^0
GWO | 0.789867 | 0.404891 | 263.89723175 | 263.90408986 | 8.84×10^−3
SSA | 0.788554 | 0.408592 | 263.89586446 | 263.89915725 | 1.01×10^−2
GTO | 0.801163 | 0.374511 | 264.05418464 | 264.68330951 | 5.23×10^−1
VPPSO | 0.788370 | 0.409111 | 263.89591304 | 264.68330951 | 8.76×10^−3
Bold is the best result of all the algorithms.
Figure 10. Boxplot of three-bar truss.
5.2. Design of the Tension/Compression Spring Issue
The core task is to minimize the mass of a tension/compression spring [57], as represented visually in Figure 11. As displayed there, to decrease the mass of the spring while satisfying the engineering requirements and constraints, we need to optimize the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical formulation is given by
[Figure 11 depicts the spring with wire diameter d, mean coil diameter D, and N active coils.]
Figure 11. The schema of the tension/compression spring.
Consider the following variable:
$$X = [x_1, x_2, x_3] = [N, D, d] \tag{42}$$
Minimize
$$f(x) = (x_1 + 2)\,x_2\,x_3^2 \tag{43}$$
This is subject to
$$
\begin{aligned}
g_1(x) &= 1 - \frac{x_1 x_2^3}{71785\,x_3^4} \le 0 \\
g_2(x) &= \frac{4x_2^2 - x_2 x_3}{12566\,(x_2 x_3^3 - x_3^4)} + \frac{1}{5108\,x_3^2} - 1 \le 0 \\
g_3(x) &= 1 - \frac{140.45\,x_3}{x_1 x_2^2} \le 0 \\
g_4(x) &= \frac{x_2 + x_3}{1.5} - 1 \le 0
\end{aligned}
\tag{44}
$$
where $x_1 \in [2, 15]$, $x_2 \in [0.25, 1.3]$, and $x_3 \in [0.05, 2]$.
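To make the variable mapping concrete, the following Python sketch (again, our illustration rather than the authors' code) evaluates Eqs. (43) and (44); judging by the magnitudes, the Table 8 columns list the variables in the order (d, D, N):

```python
def spring_mass(N, D, d):
    # Objective of Eq. (43): f = (N + 2) * D * d^2
    return (N + 2) * D * d**2

def spring_constraints(N, D, d):
    # Constraints of Eq. (44); g_i <= 0 means feasible
    g1 = 1 - (D**3 * N) / (71785 * d**4)
    g2 = (4 * D**2 - D * d) / (12566 * (D * d**3 - d**4)) + 1 / (5108 * d**2) - 1
    g3 = 1 - 140.45 * d / (D**2 * N)
    g4 = (D + d) / 1.5 - 1
    return g1, g2, g3, g4

# MKOA's reported optimum (Table 8), read as (d, D, N) = (0.0516, 0.3550, 11.3848)
d, D, N = 0.0516, 0.3550, 11.3848
print(spring_mass(N, D, d))  # ≈ 0.01265, close to the reported 0.01266534
# Constraint residuals of order 1e-3 are expected here, since Table 8
# rounds the variables to four decimals.
print(max(spring_constraints(N, D, d)))
```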
As depicted in Table 8 and Figure 12, the MKOA successfully found the optimal value, that is, f(x) = 0.01266534. Moreover, the average value indicates that the MKOA is also very competitive in solving this issue.
Table 8. Tension/compression spring design issue.

| Algorithm | x1 | x2 | x3 | Min Weight | Mean Weight | Std |
|---|---|---|---|---|---|---|
| MKOA | 0.0516 | 0.3550 | 11.3848 | 0.01266534 | 0.01266866 | 8.30 × 10^−6 |
| KOA | 0.0518 | 0.3610 | 11.0447 | 0.01267065 | 0.01270267 | 4.69 × 10^−5 |
| HHO | 0.0516 | 0.3546 | 11.4113 | 0.01266539 | 0.01370983 | 1.07 × 10^−3 |
| DBO | 0.0500 | 0.3174 | 14.0277 | 0.01271905 | 0.01348284 | 1.49 × 10^−3 |
| WOA | 0.0506 | 0.3318 | 12.9094 | 0.01268597 | 0.01348421 | 7.74 × 10^−4 |
| GWO | 0.0520 | 0.3662 | 10.7525 | 0.01267282 | 0.01269223 | 2.55 × 10^−5 |
| SSA | 0.0500 | 0.3174 | 14.0277 | 0.01271905 | 0.01354629 | 1.56 × 10^−3 |
| GTO | 0.0500 | 0.3172 | 14.0548 | 0.01273345 | 0.01333117 | 3.86 × 10^−4 |
| VPPSO | 0.0525 | 0.37706 | 10.1932 | 0.01268247 | 0.01324062 | 7.68 × 10^−4 |

Bold is the best result of all the algorithms.
Figure 12. Boxplot of tension/compression spring.
5.3. Pressure Vessel Design Issue
The main goal of this issue is to reduce the production expenses to the lowest possible level while considering multiple constraints [58]. Figure 13 presents its structure, which involves four variables: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical shell (L). The relevant problem model is shown below.

Figure 13. The schema of the pressure vessel.
Consider the following variable:
$$X = [x_1, x_2, x_3, x_4] = [T_s, T_h, R, L] \tag{45}$$
Minimize
$$f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \tag{46}$$
This is subject to
$$
\begin{aligned}
g_1(x) &= -x_1 + 0.0193\,x_3 \le 0 \\
g_2(x) &= -x_2 + 0.00954\,x_3 \le 0 \\
g_3(x) &= -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0 \\
g_4(x) &= x_4 - 240 \le 0
\end{aligned}
\tag{47}
$$
where $x_1, x_2 \in [0, 99]$ and $x_3, x_4 \in [10, 200]$.
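The cost function and constraints above translate directly into code; the sketch below (our illustration, not the authors' implementation) evaluates them at the MKOA optimum reported in Table 9:

```python
import math

def vessel_cost(x1, x2, x3, x4):
    # Objective of Eq. (46): material, forming, and welding costs
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    # Constraints of Eq. (47); g_i <= 0 means feasible
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000
    g4 = x4 - 240
    return g1, g2, g3, g4

x = (0.7781, 0.3846, 40.3197, 199.9990)  # reported MKOA optimum (Table 9)
print(vessel_cost(*x))  # ≈ 5884.7; the paper reports 5885.41 at full precision
# g1 and g2 show residuals of ~1e-4 because Table 9 rounds the variables
# to four decimals; g3 and g4 remain satisfied.
print(vessel_constraints(*x))
```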
As shown in Table 9 and Figure 14, the MKOA ranks first with an average manufacturing cost of 5888.425089. The results indicate that the MKOA is more efficient at addressing this issue.
Table 9. Pressure vessel design issue.

| Algorithm | x1 | x2 | x3 | x4 | Min Weight | Mean Weight | Std |
|---|---|---|---|---|---|---|---|
| MKOA | 0.7781 | 0.3846 | 40.3197 | 199.9990 | 5885.411899 | 5888.425089 | 5.44 × 10^0 |
| KOA | 0.7787 | 0.3847 | 40.3248 | 200.0000 | 5890.544374 | 5994.222923 | 1.94 × 10^2 |
| HHO | 0.9157 | 0.4531 | 47.1860 | 122.3723 | 6194.883922 | 6888.736995 | 4.77 × 10^2 |
| DBO | 0.7781 | 0.3846 | 40.3196 | 200.0000 | 5885.332773 | 6709.779180 | 6.68 × 10^2 |
| WOA | 0.9844 | 0.4820 | 50.2540 | 96.3426 | 6393.243404 | 8011.477556 | 1.07 × 10^3 |
| GWO | 0.7824 | 0.3874 | 40.5406 | 197.0945 | 5898.317199 | 5904.194052 | 6.60 × 10^0 |
| SSA | 0.7800 | 0.3855 | 40.4157 | 198.6666 | 5888.510291 | 6420.005590 | 5.61 × 10^2 |
| GTO | 0.9676 | 0.4845 | 48.3334 | 112.1966 | 6509.369529 | 7449.894467 | 5.03 × 10^2 |
| VPPSO | 0.7841 | 0.3876 | 40.6299 | 196.5538 | 5913.753151 | 6640.017737 | 4.92 × 10^2 |

Bold is the best result of all the algorithms.
Figure 14. Boxplot of pressure vessel.
6. Conclusions and Future Perspectives
This paper proposes a multi-strategy fusion Kepler optimization algorithm (MKOA). Firstly, the MKOA initializes the population using Good Point Set, enhancing population diversity, and introduces the DOBL strategy to strengthen its global exploration capability.
In addition, the MKOA applies the NCM strategy to the optimal individual, improving its
convergence accuracy and speed. In the end, we introduce a new position-update strategy
to balance local and global exploration, helping the KOA escape local optima.
To analyze the capabilities of the MKOA, we mainly use the CEC2017 and CEC2019 test suites for testing. The data indicate that the MKOA has more advantages than other algorithms in terms of practicality and effectiveness. Additionally, to test the practical application potential of the MKOA, three classical engineering cases are selected in this paper. The results reveal that the MKOA demonstrates strong applicability in engineering applications (Table A1).
In the future, the MKOA will be used in addressing more engineering issues, such as
UAV path planning [59], wireless sensor networks [60], and disease prediction [61].
Author Contributions: Conceptualization, D.P. (Die Pu); methodology, G.X.; software, M.Y.; valida-
tion, D.P. (Dongqi Pu); formal analysis, Y.Z.; investigation, D.P. (Dongqi Pu); resources: D.P. (Die Pu)
and G.X.; data curation, Z.Q.; writing—original draft preparation, Z.Q.; writing—review and editing,
Y.Z.; visualization, Z.Q.; supervision, M.Y.; project administration, Y.Z.; funding acquisition, Y.Z. All
authors have read and agreed to the published version of this manuscript.
Funding: This work was supported by the National Natural Science Foundation of China (Program
No. 62341124), the Natural Science Basic Research Plan in Shaanxi Province of China (Program No.
2017JM6068), and the Yunnan Fundamental Research Projects (Program No. 202201AT070030).
Institutional Review Board Statement: Not applicable.
Data Availability Statement: All data are contained within the article. If you need the source code,
please contact the corresponding author to apply and it will be provided after approval.
Conflicts of Interest: The authors declare no conflicts of interest.
Appendix A
Table A1 displays the experimental data of the MKOA, the CEC champion algorithms, SOGWO [62], and EJAYA [63] for the reference of the readers. All algorithms were run twenty times independently. The population size and the maximum number of function evaluations are fixed at 30 and 10,000, respectively.
Table A1. Results of MKOA, champion algorithm, SOGWO, and EJAYA in CEC2019.

| F | | MKOA | KOA | DE | JADE | SHADE | LSHADE | SOGWO | EJAYA |
|---|---|---|---|---|---|---|---|---|---|
| F1 | min | 2.49×10^6 | 1.76×10^8 | 2.59×10^10 | 9.81×10^8 | 3.99×10^5 | 1.07×10^8 | 2.44×10^6 | 4.83×10^6 |
| | avg | 2.35×10^7 | 5.51×10^8 | 4.68×10^10 | 3.07×10^9 | 1.04×10^6 | 7.74×10^8 | 4.75×10^8 | 8.63×10^7 |
| | std | 2.19×10^7 | 4.37×10^8 | 2.23×10^10 | 1.36×10^9 | 5.48×10^5 | 5.74×10^8 | 1.11×10^9 | 9.45×10^7 |
| F2 | min | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 |
| | avg | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 | 1.73×10^1 |
| | std | 4.74×10^−10 | 1.35×10^−6 | 3.67×10^−9 | 5.93×10^−7 | 6.83×10^−13 | 7.01×10^−15 | 1.66×10^−3 | 2.93×10^−8 |
| F3 | min | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 |
| | avg | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 | 1.27×10^1 |
| | std | 1.47×10^−10 | 1.15×10^−8 | 4.52×10^−6 | 7.32×10^−7 | 1.87×10^−15 | 2.60×10^−8 | 2.54×10^−6 | 2.20×10^−9 |
| F4 | min | 2.14×10^1 | 2.52×10^1 | 2.24×10^1 | 2.40×10^1 | 5.56×10^0 | 1.36×10^1 | 2.80×10^1 | 2.03×10^1 |
| | avg | 2.92×10^1 | 3.60×10^1 | 3.00×10^1 | 3.39×10^1 | 1.03×10^1 | 2.07×10^1 | 6.10×10^1 | 3.12×10^1 |
| | std | 5.56×10^0 | 6.19×10^0 | 5.08×10^0 | 6.46×10^0 | 2.72×10^0 | 3.99×10^0 | 3.53×10^1 | 8.60×10^0 |
| F5 | min | 1.06×10^0 | 1.16×10^0 | 1.16×10^0 | 1.12×10^0 | 1.01×10^0 | 1.02×10^0 | 1.21×10^0 | 1.05×10^0 |
| | avg | 1.17×10^0 | 1.33×10^0 | 1.24×10^0 | 1.30×10^0 | 1.04×10^0 | 1.18×10^0 | 1.41×10^0 | 1.21×10^0 |
| | std | 1.05×10^−1 | 1.37×10^−1 | 6.25×10^−2 | 9.00×10^−2 | 1.67×10^−2 | 1.03×10^−1 | 1.81×10^−1 | 8.85×10^−2 |
| F6 | min | 8.78×10^0 | 8.68×10^0 | 8.45×10^0 | 9.83×10^0 | 6.71×10^0 | 9.18×10^0 | 1.11×10^1 | 9.60×10^0 |
| | avg | 9.36×10^0 | 9.89×10^0 | 8.97×10^0 | 1.07×10^0 | 7.73×10^0 | 1.03×10^1 | 1.16×10^1 | 1.07×10^1 |
| | std | 3.51×10^−1 | 8.75×10^−1 | 3.86×10^−1 | 5.73×10^−1 | 5.95×10^−1 | 5.40×10^−1 | 3.57×10^−1 | 7.67×10^−1 |
| F7 | min | 8.41×10^1 | 3.25×10^2 | 1.74×10^2 | 3.68×10^2 | 4.23×10^1 | 2.94×10^2 | 1.67×10^2 | 2.63×10^2 |
| | avg | 4.64×10^2 | 5.28×10^2 | 2.69×10^2 | 7.59×10^2 | 1.15×10^2 | 4.76×10^2 | 4.91×10^2 | 5.41×10^2 |
| | std | 2.04×10^2 | 9.65×10^1 | 7.62×10^1 | 1.68×10^2 | 7.34×10^1 | 1.34×10^2 | 3.15×10^2 | 1.57×10^2 |
| F8 | min | 4.79×10^0 | 5.47×10^0 | 5.23×10^0 | 4.99×10^0 | 4.08×10^0 | 5.04×10^0 | 4.25×10^0 | 5.20×10^0 |
| | avg | 5.68×10^0 | 5.93×10^0 | 5.75×10^0 | 5.78×10^0 | 4.73×10^0 | 5.68×10^0 | 5.83×10^0 | 5.96×10^0 |
| | std | 4.31×10^−1 | 3.19×10^−1 | 2.55×10^−1 | 4.24×10^−1 | 4.85×10^−1 | 4.62×10^−1 | 8.53×10^−1 | 5.44×10^−1 |
| F9 | min | 2.37×10^0 | 2.47×10^0 | 2.51×10^0 | 2.37×10^0 | 2.36×10^0 | 2.36×10^0 | 2.98×10^0 | 2.50×10^0 |
| | avg | 2.43×10^0 | 2.65×10^0 | 2.60×10^0 | 2.47×10^0 | 2.43×10^0 | 2.38×10^0 | 3.81×10^0 | 2.73×10^0 |
| | std | 6.02×10^−2 | 1.74×10^−1 | 7.22×10^−2 | 5.28×10^−2 | 7.92×10^−2 | 1.63×10^−2 | 7.27×10^−1 | 2.28×10^−1 |
| F10 | min | 1.38×10^0 | 2.03×10^1 | 2.01×10^1 | 2.03×10^1 | 1.00×10^1 | 2.02×10^1 | 2.04×10^1 | 2.03×10^1 |
| | avg | 1.84×10^1 | 2.04×10^1 | 2.02×10^1 | 2.05×10^1 | 1.91×10^1 | 2.04×10^1 | 2.05×10^1 | 2.05×10^1 |
| | std | 5.99×10^0 | 8.01×10^−2 | 5.80×10^−2 | 1.23×10^−1 | 3.20×10^0 | 9.54×10^−2 | 1.20×10^−1 | 8.15×10^−2 |

Bold is the best result of all the algorithms.
References
1.
Jia, H.; Xing, Z.; Song, W. A new hybrid seagull optimization algorithm for feature selection. IEEE Access 2019,7, 49614–49631.
[CrossRef]
2.
Zhang, Y.; Hou, X. Application of video image processing in sports action recognition based on particle swarm optimization
algorithm. Prev. Med. 2023,173, 107592. [CrossRef]
3.
Wang, Z.; Xie, H. Wireless sensor network deployment of 3D surface based on enhanced grey wolf optimizer. IEEE Access 2020,
8, 57229–57251. [CrossRef]
4.
Zhang, J.; Zhu, X.; Li, J. Intelligent Path Planning with an Improved Sparrow Search Algorithm for Workshop UAV Inspection.
Sensors 2024,24, 1104. [CrossRef] [PubMed]
5.
Li, Z.; Zhao, C.; Zhang, G.; Zhu, D.; Cui, L. Multi-strategy improved sparrow search algorithm for job shop scheduling problem.
Clust. Comput. 2024,27, 4605–4619. [CrossRef]
6.
Zelinka, I.; Snásel, V.; Abraham, A. Handbook of Optimization: From Classical to Modern Approach; Springer Science & Business
Media: Berlin/Heidelberg, Germany, 2012; Volume 38. [CrossRef].
7.
Mavrovouniotis, M.; Müller, F.M.; Yang, S. Ant colony optimization with local search for dynamic traveling salesman problems.
IEEE Trans. Cybern. 2016,47, 1743–1756. [CrossRef] [PubMed]
8.
Ghasemi, M.; Zare, M.; Trojovský, P.; Rao, R.V.; Trojovská, E.; Kandasamy, V. Optimization based on the smart behavior of plants
with its engineering applications: Ivy algorithm. Knowl.-Based Syst. 2024,295, 111850. [CrossRef]
9. Holland, J.H. Genetic algorithms. Sci. Am. 1992,267, 66–73. [CrossRef]
10.
Zhong, W.; Liu, J.; Xue, M.; Jiao, L. A multiagent genetic algorithm for global numerical optimization. IEEE Trans. Syst. Man
Cybern. Part B (Cybern.) 2004,34, 1128–1141. [CrossRef]
11.
Akopov, A.S.; Hevencev, M.A. A multi-agent genetic algorithm for multi-objective optimization. In Proceedings of the 2013 IEEE
International Conference on Systems, Man, and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 1391–1395. [CrossRef].
12.
Martínez-Álvarez, F.; Asencio-Cortés, G.; Torres, J.F.; Gutiérrez-Avilés, D.; Melgar-García, L.; Pérez-Chacón, R.; Rubio-Escudero,
C.; Riquelme, J.C.; Troncoso, A. Coronavirus optimization algorithm: A bioinspired metaheuristic based on the COVID-19
propagation model. Big Data 2020,8, 308–322. [CrossRef] [PubMed]
13.
Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob.
Optim. 1997,11, 341–359. [CrossRef]
14.
Tang, D.; Dong, S.; Jiang, Y.; Li, H.; Huang, Y. ITGO: Invasive tumor growth optimization algorithm. Appl. Soft Comput. 2015,
36, 670–698. [CrossRef]
15.
Gao, Y.; Zhang, J.; Wang, Y.; Wang, J.; Qin, L. Love evolution algorithm: A stimulus–value–role theory-inspired evolutionary
algorithm for global optimization. J. Supercomput. 2024,80, 12346–12407. [CrossRef]
16.
Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on
Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [CrossRef].
17.
Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern.
Part B (Cybern.) 1996,26, 29–41. [CrossRef]
18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014,69, 46–61. [CrossRef]
19.
Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications.
Future Gener. Comput. Syst. 2019,97, 849–872. [CrossRef]
20. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016,95, 51–67. [CrossRef]
21.
Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020,
8, 22–34. [CrossRef]
22.
Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-horse optimizer: A novel nature-inspired meta-heuristic for global optimization
problems. Appl. Intell. 2023,53, 11833–11860. [CrossRef]
23.
Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering
problems. Knowl.-Based Syst. 2019,165, 169–196. [CrossRef]
24.
Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023,
79, 7305–7336. [CrossRef]
25.
Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic
algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023,262, 110248. [CrossRef]
26.
Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic.
Expert Syst. Appl. 2020,152, 113377. [CrossRef]
27.
Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design
optimization problems. Comput.-Aided Des. 2011,43, 303–315. [CrossRef]
28.
Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based
Syst. 2020,195, 105709. [CrossRef]
29. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018,64, 161–185. [CrossRef]
30.
Zhang, J.; Xiao, M.; Gao, L.; Pan, Q. Queuing search algorithm: A novel metaheuristic algorithm for solving engineering
optimization problems. Appl. Math. Model. 2018,63, 464–490. [CrossRef]
31.
Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983,220, 671–680. [CrossRef]
[PubMed]
32.
Azizi, M.; Aickelin, U.; A. Khorshidi, H.; Baghalzadeh Shishehgarkhaneh, M. Energy valley optimizer: A novel metaheuristic
algorithm for global and engineering optimization. Sci. Rep. 2023,13, 226. [CrossRef] [PubMed]
33.
Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired meta-
heuristic optimization algorithm. Mathematics 2022,10, 3466. [CrossRef]
34.
Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009,179, 2232–2248. [CrossRef]
35.
Formato, R. Central force optimization: A new metaheuristic with applications in applied electromagnetics. Prog. Electromagn.
Res. 2007,77, 425–491. [CrossRef]
36. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016,96, 120–133. [CrossRef]
37.
Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new
metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023,268, 110454. [CrossRef]
38.
Hakmi, S.H.; Shaheen, A.M.; Alnami, H.; Moustafa, G.; Ginidi, A. Kepler algorithm for large-scale systems of economic dispatch
with heat optimization. Biomimetics 2023,8, 608. [CrossRef] [PubMed]
39.
Houssein, E.H.; Abdalkarim, N.; Samee, N.A.; Alabdulhafith, M.; Mohamed, E. Improved Kepler Optimization Algorithm for
enhanced feature selection in liver disease classification. Knowl.-Based Syst. 2024,297, 111960. [CrossRef]
40.
Abdel-Basset, M.; Mohamed, R.; Alrashdi, I.; Sallam, K.M.; Hameed, I.A. CNN-IKOA: Convolutional neural network with
improved Kepler optimization algorithm for image segmentation: Experimental validation and numerical exploration. J. Big
Data 2024,11, 13. [CrossRef]
41.
Hakmi, S.H.; Alnami, H.; Ginidi, A.; Shaheen, A.; Alghamdi, T.A. A Fractional Order-Kepler Optimization Algorithm (FO-KOA)
for single and double-diode parameters PV cell extraction. Heliyon 2024,10, e35771. [CrossRef]
42.
Mohamed, R.; Abdel-Basset, M.; Sallam, K.M.; Hezam, I.M.; Alshamrani, A.M.; Hameed, I.A. Novel hybrid kepler optimization
algorithm for parameter estimation of photovoltaic modules. Sci. Rep. 2024,14, 3453. [CrossRef] [PubMed]
43. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997,1, 67–82. [CrossRef]
44.
Zhang, R.; Zhu, Y. Predicting the mechanical properties of heat-treated woods using optimization-algorithm-based BPNN. Forests
2023,14, 935. [CrossRef]
45.
Keng, H.L.; Yuan, W. Applications of Number Theory to Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1981. [CrossRef].
46.
Zhang, D.; Zhao, Y.; Ding, J.; Wang, Z.; Xu, J. Multi-Strategy Fusion Improved Adaptive Hunger Games Search. IEEE Access 2023,
11, 67400–67410. [CrossRef]
47.
Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International conference
on computational intelligence for modelling, control and automation and international conference on intelligent agents, web
technologies and internet commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
[CrossRef].
48.
Mohapatra, S.; Mohapatra, P. Fast random opposition-based learning Golden Jackal Optimization algorithm. Knowl.-Based Syst.
2023,275, 110679. [CrossRef]
49.
Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution. IEEE Trans. Evol. Comput. 2008,12, 64–79.
[CrossRef]
50.
Ye, M.; Zhou, H.; Yang, H.; Hu, B.; Wang, X. Multi-strategy improved dung beetle optimization algorithm and its applications.
Biomimetics 2024,9, 291. [CrossRef] [PubMed]
51. Li, D.; Liu, C.; Gan, W. A new cognitive model: Cloud model. Int. J. Intell. Syst. 2009,24, 357–375. [CrossRef]
52.
Sadeeq, H.T.; Abdulazeez, A.M. Giant trevally optimizer (GTO): A novel metaheuristic algorithm for global optimization and
challenging engineering problems. IEEE Access 2022,10, 121615–121640. [CrossRef]
53.
Shami, T.M.; Mirjalili, S.; Al-Eryani, Y.; Daoudi, K.; Izadi, S.; Abualigah, L. Velocity pausing particle swarm optimization: A novel
variant for global optimization. Neural Comput. Appl. 2023,35, 9193–9223. [CrossRef]
54.
Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic
optimization algorithm. Comput. Ind. Eng. 2021,157, 107250. [CrossRef]
55.
Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for
comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011,1, 3–18. [CrossRef]
56.
Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017,105, 30–47.
[CrossRef]
57.
Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the
state of the art. Comput. Methods Appl. Mech. Eng. 2002,191, 1245–1287. [CrossRef]
58.
dos Santos Coelho, L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design
problems. Expert Syst. Appl. 2010,37, 1676–1683. [CrossRef]
59. Zhou, X.; Shi, G.; Zhang, J. Improved Grey Wolf Algorithm: A Method for UAV Path Planning. Drones 2024,8, 675. [CrossRef]
60.
Wang, J.; Ju, C.; Gao, Y.; Sangaiah, A.K.; Kim, G.j. A PSO based energy efficient coverage control algorithm for wireless sensor
networks. Comput. Mater. Contin. 2018,56, 433–446. [CrossRef].
61.
Mienye, I.D.; Sun, Y. Improved Heart Disease Prediction Using Particle Swarm Optimization Based Stacked Sparse Autoencoder.
Electronics 2021,10, 2347. [CrossRef]
62.
Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020,
151, 113389. [CrossRef]
63.
Zhang, Y.; Chi, A.; Mirjalili, S. Enhanced Jaya algorithm: A simple but efficient optimization method for constrained engineering
design problems. Knowl.-Based Syst. 2021,233, 107555. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.