Citation: Chen, M.; Zhou, Y.; Luo, Q. An Improved Arithmetic Optimization Algorithm for Numerical Optimization Problems. Mathematics 2022, 10, 2152. https://doi.org/10.3390/math10122152
Academic Editor: Frank Werner
Received: 28 May 2022; Accepted: 17 June 2022; Published: 20 June 2022
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
An Improved Arithmetic Optimization Algorithm for
Numerical Optimization Problems
Mengnan Chen 1, Yongquan Zhou 1,2,* and Qifang Luo 1,2
1 College of Artificial Intelligence, Guangxi University for Nationalities, Nanning 530006, China; 2020210812000995@stu.gxmzu.edu.cn (M.C.); 20060043@gxun.edu.cn (Q.L.)
2 Guangxi Key Laboratories of Hybrid Computation and IC Design Analysis, Nanning 530006, China
* Correspondence: zhouyongquan@gxun.edu.cn; Tel.: +86-136-0788-2594
Abstract:
The arithmetic optimization algorithm is a recently proposed metaheuristic algorithm. In
this paper, an improved arithmetic optimization algorithm (IAOA) based on the population control
strategy is introduced to solve numerical optimization problems. By classifying the population
and adaptively controlling the number of individuals in the subpopulation, the information of each
individual can be used effectively, which speeds up the algorithm to find the optimal value, avoids
falling into local optimum, and improves the accuracy of the solution. The performance of the
proposed IAOA algorithm is evaluated on six systems of nonlinear equations, ten integrations, and
engineering problems. The results show that the proposed algorithm outperforms other algorithms
in terms of convergence speed, convergence accuracy, stability, and robustness.
Keywords:
arithmetic optimization algorithm; population control strategy; systems of nonlinear
equations; numerical integrals; metaheuristic
MSC: 68T20
1. Introduction
In the practical application calculations of science and engineering, many mathemat-
ical problems will be involved, such as nonlinear equation systems (NESs), numerical
integration, etc. There are tremendous methods for solving NESs, including traditional
techniques and intelligent optimization algorithms. Traditional techniques to solve NESs
use gradient information [1], such as Newton's method [2,3], the quasi-Newton method [4], the steepest descent method, etc. Because these methods rely on the selection of initial points and are prone to falling into local optima, they cannot obtain high-quality solutions for some specific problems. Metaheuristic algorithms, however, have the characteristics of low requirements on the initial point, a wide search range, high efficiency, and robustness, which break through the limitations of the traditional methods. In recent years, metaheuristic algorithms have made great contributions to solving NESs (Karr et al. [5]; Ouyang et al. [6]; Jaberipour et al. [7]; Pourjafari et al. [8]; Jia et al. [9]; Ren et al. [10]; Cai et al. [11]; Abdollahi et al. [12]; Hirsch et al. [13]; Sacco et al. [14]; Gong et al. [15]; Ariyaratne et al. [16]; Gong et al. [17]; Ibrahim et al. [18]; Liao et al. [19]; Ning et al. [20]; Rizk-Allah et al. [21]; Ji et al. [22]; Turgut et al. [23]).
Numerical integration is a very basic computational problem. It is well-known that,
when calculating the definite integral, the integrand is required to be easily given and
then solved by the Newton-Leibniz formula. However, this method has many limitations,
because in many practical problems, the original function of the integrand cannot be
expressed, or the calculation is too complicated, so the definite integral of the integrand
is replaced by a suitable finite sum approximation. The traditional numerical integration
methods include the trapezoidal method, rectangle method, Romberg method, Gauss
method, Simpson’s method, Newton’s method, etc. The above methods all divide the
integral interval into equal parts, and the calculation efficiency is not high. Therefore, it is
of great significance to find a new technique with a fast convergence speed, high precision,
and strong robustness for numerical integration. Zhou et al. [24], based on the evolutionary strategy method, worked to solve numerical integration. Wei et al. [25] researched a numerical integration method based on particle swarm optimization. Wei et al. [26], based on functional networks, worked to solve numerical integration. Deng et al. [27] solved numerical integration problems based on the differential evolution algorithm. Xiao et al. [28] applied an improved bat algorithm to numerical integration. The quality of the solutions obtained by the above techniques is higher than that of the traditional methods.
All along, engineering optimization problems have been a popular area of research.
Metaheuristic algorithms have been widely applied to engineering optimization problems due to their great practical significance, for example to the automatic adjustment of controller coefficients (Szczepanski et al. [29]; Hu et al. [30]), system identification (Szczepanski et al. [31]; Liu et al. [32]), global path planning (Szczepanski et al. [33]; Brand et al. [34]), and robotic arm scheduling (Szczepanski et al. [35]; Kolakowska et al. [36]).
The Arithmetic Optimization Algorithm (AOA) [37] is a novel metaheuristic algorithm proposed by Abualigah et al. in 2021. AOA is a mathematical-model-based technique that simulates the behavior of the arithmetic operators (i.e., Multiplication, Division, Subtraction, and Addition) and their influence on the best solution found so far. Several improvements and practical applications of the algorithm have been proposed. Premkumar et al. [38] proposed a multi-objective arithmetic optimization algorithm (MOAOA) for solving real-world multi-objective CEC-2021-constrained optimization problems. Bansal et al. [39] used a binary arithmetic optimization algorithm for integrated features and feature selection. Agushaka et al. [40] introduced an advanced arithmetic optimization algorithm for solving mechanical engineering design problems. Abualigah et al. [41] presented a novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation. Xu et al. [42] hybridized an extreme learning machine and a developed version of the arithmetic optimization algorithm for model identification of proton exchange membrane fuel cells. Izci et al. [43] introduced an improved arithmetic optimization algorithm for the optimal design of PID controllers. Khatir et al. [44] proposed an improved artificial neural network using the arithmetic optimization algorithm for damage assessment.
The basic AOA still has some drawbacks. For instance, it is easy to fall into a local
optimum due to the location update based on the optimal value, premature convergence,
and low solution accuracy, which need to be solved. Furthermore, in order to seek a more
efficient way to solve numerical problems, in this paper, an improved arithmetic opti-
mization algorithm (IAOA) based on the population control strategy is proposed to solve
numerical optimization problems. By classifying the population and adaptively controlling
the number of individuals in the subpopulation, the information of each individual can be
used effectively while increasing the population diversity. More individuals are needed in
the early iterations to perform a large-scale search that avoids falling into the local optimum.
The search around the optimal value later in the iterations by more individuals speeds up
the algorithm to find the optimal value and improves the accuracy of the solution. The
performance of the proposed IAOA algorithm is evaluated on six systems of nonlinear
equations, ten integrations, and engineering problems. The results show that the proposed
algorithm outperforms the other algorithms in terms of convergence speed, convergence
accuracy, stability, and robustness.
The main structure of this paper is as follows. Section 2 reviews the relevant knowledge for nonlinear equation systems, integration, and the basic arithmetic optimization algorithm (AOA). Section 3 introduces the proposed IAOA in detail. Section 4 presents experimental results, comparisons, and analyses. Section 5 concludes the work and proposes future research directions.
2. Preliminaries
2.1. Nonlinear Equation Systems
Generally, a nonlinear equation system can be formulated as follows.
\[
\mathrm{NES} =
\begin{cases}
f_1(x_1, x_2, \ldots, x_D) = 0\\
\quad\vdots\\
f_i(x_1, x_2, \ldots, x_D) = 0\\
\quad\vdots\\
f_n(x_1, x_2, \ldots, x_D) = 0
\end{cases}
\tag{1}
\]
where x is a D-dimensional decision variable and n is the number of equations; some of the equations may be linear and the others nonlinear. If x* satisfies f_i(x*) = 0 for every i, then x* is a root of the system of equations.
Before using the optimization algorithm to solve the NES, the first step is to convert it into a single-objective optimization problem [17] as follows:
\[
\min f(x) = \sum_{i=1}^{n} f_i^2(x), \qquad x = (x_1, x_2, \ldots, x_i, \ldots, x_D)
\tag{2}
\]
Finding the minimum of an optimization problem is equivalent to finding the root of
the NES.
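Equation (2) simply turns the residuals of the system into one scalar objective. The following Python sketch is our own illustration of this transformation; the two-equation system used in the example is made up and is not one of the paper's benchmarks:

```python
import numpy as np

def nes_objective(residuals):
    """Turn a list of residual functions f_i(x) into the scalar
    objective of Equation (2): F(x) = sum_i f_i(x)^2."""
    def objective(x):
        return sum(f(x) ** 2 for f in residuals)
    return objective

# Hypothetical two-equation example: x1^2 + x2 - 3 = 0, x1 + x2^2 - 5 = 0
residuals = [
    lambda x: x[0] ** 2 + x[1] - 3.0,
    lambda x: x[0] + x[1] ** 2 - 5.0,
]
F = nes_objective(residuals)
print(F(np.array([1.0, 2.0])))  # 0.0 at a root, positive elsewhere
```

Any minimizer that drives F(x) to zero has found a root of the original system.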
2.2. Numerical Integration
Definite integrals are basic mathematical calculation problems of the form
\[
\int_a^b f(x)\,dx
\tag{3}
\]
where f(x) represents the integrand, and a and b represent the lower and upper bounds of integration, respectively.
Usually, when evaluating a definite integral, we first find the antiderivative F(x) of the integrand and then use the Newton-Leibniz formula:
\[
\int_a^b f(x)\,dx = F(b) - F(a), \qquad F'(x) = f(x)
\tag{4}
\]
However, in many cases it is difficult to obtain F(x), so the Newton-Leibniz formula cannot be used.
In addition, most numerical quadrature methods are based on quadrature formulas over equidistant nodes, or stipulate that the equidistant nodes remain unchanged during the whole calculation, as shown in Figure 1a. Many nodes are then needed to obtain high accuracy. However, the best segmentation is generally not the predetermined equidistant points, as shown in Figure 1b. Randomly generated subintervals of unequal length can follow the concave and convex changes of the function curve, so the obtained value is more accurate than with the traditional methods. Based on this idea, there is an integration method based on non-equidistant point division [24]. First, some points are generated randomly on the integration interval; then, an optimization algorithm is used to optimize these split points; finally, a higher-accuracy value is obtained. This method can compute not only the definite integral of an ordinary function but also the integral of a singular function or an oscillatory function [27]. The flow of the numerical integration algorithm based on unequal-point segmentation is as follows [24].
(1) Randomly initialize the population in the search space S.
(2) Arrange the components of each individual in the integration interval in ascending order. The interval then has n (n = D + 2) nodes and n − 1 segments. Calculate the distance $h_j$ between two adjacent nodes and the function value $f(x_k)$ of each node; then calculate the function values of the D + 2 nodes and of the middle node of each subsection. Find the minimum value $w_j$ and the maximum value $W_j$ (j = 1, 2, ..., D + 1) among the function values of the left endpoint, middle node, and right endpoint of each subsection.
(3) Calculate the fitness value $F(i) = \frac{1}{2}\sum_{j=1}^{D+1} h_j (W_j - w_j)$ (a minimal sketch of this step is given after the list).
(4) Update individuals through an optimization algorithm.
(5) Repeat step 4 until the stop condition is reached.
(6) Output the accuracy and integral values.
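The fitness of step (3) measures, segment by segment, how much the function can vary over the current split. A minimal Python sketch of that computation is given below; the variable names and the vectorized integrand are our own choices, not the paper's code:

```python
import numpy as np

def segmentation_fitness(points, f, a, b):
    """Fitness of step (3): F = 1/2 * sum_j h_j * (W_j - w_j), where W_j and w_j
    are the max/min of f over the left end, midpoint and right end of each of
    the D + 1 subintervals defined by the sorted split points."""
    nodes = np.sort(np.concatenate(([a], points, [b])))   # D + 2 nodes
    h = np.diff(nodes)                                     # D + 1 segment lengths
    left, right = nodes[:-1], nodes[1:]
    mid = 0.5 * (left + right)
    vals = np.vstack([f(left), f(mid), f(right)])
    return 0.5 * np.sum(h * (vals.max(axis=0) - vals.min(axis=0)))

# Example: split points for f(x) = x^2 on [0, 2]; better-placed points give a smaller value
print(segmentation_fitness(np.array([0.5, 1.0, 1.5]), lambda x: x**2, 0.0, 2.0))
```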
Figure 1. Two methods of segmentation when solving numerical integrals: (a) equidistant division and (b) non-equidistant division.
The numerical integration method based on Hermite interpolation only needs to
provide the value of the integral node functions and has high precision. However, this
method is based on equidistant segmentation. In this paper, the adaptability of unequal-
spaced partitioning and the numerical integration method based on Hermite interpolation
are combined to solve the numerical integration problem, and the formula is as follows:
\[
\int_a^b f(x)\,dx \approx \sum_{k=1}^{n-1}\frac{h_k}{2}\left[f(x_k)+f(x_{k+1})\right]
- \sum_{i=1}^{n-1}\frac{25}{144}\,\frac{h_i\left[f(a)+f(b)\right]}{n-1}
+ \sum_{i=1}^{n-1}\frac{h_i}{3}\,\frac{f(a+h_i)+f(b-h_i)}{n-1}
- \sum_{i=1}^{n-1}\frac{h_i}{4}\,\frac{f(a+2h_i)+f(b-2h_i)}{n-1}
+ \sum_{i=1}^{n-1}\frac{h_i}{9}\,\frac{f(a+3h_i)+f(b-3h_i)}{n-1}
- \sum_{i=1}^{n-1}\frac{h_i}{48}\,\frac{f(a+4h_i)+f(b-4h_i)}{n-1}
\tag{5}
\]
where n is the number of random split points, $h_i$ is the distance between two adjacent points, and f(x) is the integrand. The advantage of this method is that it does not need to calculate derivative values and only requires the node function values. Before using the optimization algorithm to solve the integration, the first step is to convert it into a single-objective optimization problem as follows:
\[
\min F(x) = \left| \int_a^b f(x)\,dx - E \right|
\tag{6}
\]
where $\int_a^b f(x)\,dx$ is obtained by Equation (5), and E denotes the exact value.
Combining the optimization algorithm with Equation (5), the whole solution process is as follows (a short code sketch follows the list).
(1) Randomly initialize the population in the search space S.
(2) Arrange the components of each individual in the integration interval in ascending order. The interval then has n (n = D + 2) nodes and n − 1 segments. Calculate the distance $h_i$ between two adjacent nodes and the function value $f(x_k)$ of each node, and substitute them into Equation (5).
(3) Calculate the fitness value by Equation (6).
(4) Update individuals through an optimization algorithm.
(5) Repeat step 4 until the stop condition is reached.
(6) Output the accuracy and integral values.
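The following Python sketch is one possible implementation of the quadrature of Equation (5) and the fitness of Equation (6), written against the formulas as reconstructed above; the helper names, and the assumption that the decision vector holds the D interior split points, are ours:

```python
import numpy as np

def hermite_quadrature(points, f, a, b):
    """Approximate the integral by Equation (5) over the non-equidistant nodes
    a = x_1 < ... < x_n = b defined by the D interior split points."""
    x = np.sort(np.concatenate(([a], points, [b])))    # n = D + 2 nodes
    h = np.diff(x)                                     # n - 1 segment lengths
    m = len(h)                                         # n - 1
    trapezoid = np.sum(h / 2.0 * (f(x[:-1]) + f(x[1:])))
    # correction terms of Equation (5), each averaged over the n - 1 segments
    corr = (-25.0 / 144.0 * np.sum(h * (f(a) + f(b)))
            + np.sum(h / 3.0 * (f(a + h) + f(b - h)))
            - np.sum(h / 4.0 * (f(a + 2 * h) + f(b - 2 * h)))
            + np.sum(h / 9.0 * (f(a + 3 * h) + f(b - 3 * h)))
            - np.sum(h / 48.0 * (f(a + 4 * h) + f(b - 4 * h)))) / m
    return trapezoid + corr

def integration_fitness(points, f, a, b, exact):
    """Fitness of Equation (6): absolute error against the exact value E."""
    return abs(hermite_quadrature(points, f, a, b) - exact)

# Example with F01 from Table 8: f(x) = x^2 on [0, 2], exact value 8/3
pts = np.linspace(0.0, 2.0, 12)[1:-1]                  # D = 10 interior points
print(integration_fitness(pts, lambda x: x**2, 0.0, 2.0, 8.0 / 3.0))
```

In the IAOA, the split points `points` are exactly the decision variables that the optimizer moves around to shrink this fitness.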
2.3. The Arithmetic Optimization Algorithm (AOA)
The AOA algorithm is a population-based metaheuristic algorithm that solves optimization problems by utilizing mathematical operators (Multiplication ("×"), Division ("÷"), Subtraction ("−"), and Addition ("+")). The specific description is as follows.
2.3.1. Initialization Phase
Generate a candidate solution matrix randomly.
\[
X = \begin{bmatrix}
x_{1,1} & \cdots & \cdots & x_{1,j} & \cdots & x_{1,n-1} & x_{1,n}\\
x_{2,1} & \cdots & \cdots & x_{2,j} & \cdots & x_{2,n-1} & x_{2,n}\\
\vdots & & & \vdots & & \vdots & \vdots\\
x_{N-1,1} & \cdots & \cdots & x_{N-1,j} & \cdots & x_{N-1,n-1} & x_{N-1,n}\\
x_{N,1} & \cdots & \cdots & x_{N,j} & \cdots & x_{N,n-1} & x_{N,n}
\end{bmatrix}
\tag{7}
\]
After the initialization step, calculate the Math Optimizer Accelerated (MOA) function
and use it to choose between exploration and exploitation. The function is as follows:
\[
MOA(t) = Min + t \times \frac{Max - Min}{T}
\tag{8}
\]
where Max = 0.9 and Min = 0.2 denote the maximum and minimum of the function value, MOA(t) represents the function value at the current iteration, and T and t represent the maximum number of iterations and the current iteration, respectively.
2.3.2. Exploration Phase
During the exploration phase, the operators (Multiplication ("×") and Division ("÷")) are used to explore the space randomly when r1 > MOA (see Algorithm 1). The mathematical model is as follows:
\[
x_{i,j}(t+1) =
\begin{cases}
best(x_j) \div (MOP + \varepsilon) \times ((UB_j - LB_j) \times \mu + LB_j), & r_2 < 0.5\\
best(x_j) \times MOP \times ((UB_j - LB_j) \times \mu + LB_j), & \text{otherwise}
\end{cases}
\tag{9}
\]
where $r_2$ is a random number, $x_{i,j}(t+1)$ represents the jth position of the ith solution in the (t+1)th iteration, $best(x_j)$ denotes the jth position of the global optimal solution, $\varepsilon$ is a small number that avoids division by zero, $UB_j$ and $LB_j$ represent the upper and lower bounds of the jth dimension, respectively, and $\mu$ is a control parameter equal to 0.5. The Math Optimizer Probability (MOP) is as follows:
\[
MOP(t) = 1 - \frac{t^{1/\alpha}}{T^{1/\alpha}}
\tag{10}
\]
where MOP(t) represents the function value at the current iteration, and $\alpha$ is a sensitive parameter set to 5.
2.3.3. Exploitation Phase
During the exploitation phase, the operators (Subtraction ("−") and Addition ("+")) are used to execute the exploitation. When r1 ≤ MOA, the mathematical model is as follows:
\[
x_{i,j}(t+1) =
\begin{cases}
best(x_j) - MOP \times ((UB_j - LB_j) \times \mu + LB_j), & r_3 < 0.5\\
best(x_j) + MOP \times ((UB_j - LB_j) \times \mu + LB_j), & \text{otherwise}
\end{cases}
\tag{11}
\]
where $r_3$ is a random number. The pseudo-code of the AOA is given in Algorithm 1 [37].
Algorithm 1 AOA
1. Set up the initial parameters α,µ.
2. Initialize the population randomly.
3. for t= 1: T
4. Calculate the fitness function and select the best solution.
5. Update the MOA (using Equation (8)) and MOP (using Equation (10)).
6. for i= 1: N
7. for j= 1: Dim
8. Generate the random values between [0, 1] (r1,r2,r3)
9. if r1>MOA
10. if r2> 0.5
11. Update the position of the individual by Equation (9).
12. else
13. Update the position of the individual by Equation (9).
14. end
15. else
16. if r3> 0.5
17. Update the position of the individual by Equation (11).
18. else
19. Update the position of the individual by Equation (11).
20. end
21. end
22. end
23. end
24. t=t+ 1
25. end
26. Return the best solution (x).
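For readers who prefer executable code to pseudo-code, the sketch below is a minimal Python rendering of the AOA loop of Equations (8)–(11). It is our own illustrative implementation (the study itself used MATLAB), and details such as clipping to the bounds are assumptions:

```python
import numpy as np

def aoa(obj, lb, ub, dim, n_pop=50, max_iter=200, alpha=5, mu=0.5, eps=1e-12):
    """Minimal AOA: minimize obj over [lb, ub]^dim, following Algorithm 1."""
    X = lb + np.random.rand(n_pop, dim) * (ub - lb)
    fit = np.array([obj(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(1, max_iter + 1):
        moa = 0.2 + t * (0.9 - 0.2) / max_iter                      # Equation (8)
        mop = 1.0 - t ** (1.0 / alpha) / max_iter ** (1.0 / alpha)  # Equation (10)
        scale = (ub - lb) * mu + lb                                 # ((UB - LB) * mu + LB)
        for i in range(n_pop):
            for j in range(dim):
                r1, r2, r3 = np.random.rand(3)
                if r1 > moa:                      # exploration, Equation (9)
                    X[i, j] = best[j] / (mop + eps) * scale if r2 < 0.5 else best[j] * mop * scale
                else:                             # exploitation, Equation (11)
                    X[i, j] = best[j] - mop * scale if r3 < 0.5 else best[j] + mop * scale
            X[i] = np.clip(X[i], lb, ub)
            fit[i] = obj(X[i])
            if fit[i] < best_fit:
                best, best_fit = X[i].copy(), fit[i]
    return best, best_fit

# Example: sphere function in 5 dimensions
print(aoa(lambda x: np.sum(x**2), -10.0, 10.0, 5))
```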
3. Our Proposed IAOA
3.1. Motivation for Improving the AOA
In the AOA, the population is updated based on the global optimal solution. Once the algorithm falls into a local optimum, the entire population stagnates, and premature convergence occurs in some cases [33]. In addition, the algorithm does not fully utilize the information of the individuals in the population. Therefore, to make full use of this information and address the weaknesses of the AOA, the improved arithmetic optimization algorithm (IAOA) is proposed in this paper.
3.2. Population Control Mechanism
In the basic arithmetic optimization algorithm (AOA), the operators (Multiplication ("×"), Division ("÷"), Subtraction ("−"), and Addition ("+")) search randomly around a single optimal solution, which leads to a loss of population diversity. Therefore, it is necessary to classify the population.
3.2.1. The First Subpopulation
Sort the population according to fitness and select the first num_best individuals as the first subpopulation:
\[
num\_best = round\left(0.1N + 0.5N\left(1 - t/T\right)\right)
\tag{12}
\]
where N is the number of individuals, and t and T represent the current iteration and the maximum number of iterations, respectively. These individuals then update their positions by exchanging information with one another. The mathematical model is as follows:
\[
x_{best\_i}(t+1) = x_{best\_i}(t) + rand \times \left( best(x) - \frac{x_{best\_i}(t) + x_{best\_j}(t)}{2} \times \omega \right)
\tag{13}
\]
\[
x_{best\_j}(t+1) = x_{best\_j}(t) + rand \times \left( best(x) - \frac{x_{best\_i}(t) + x_{best\_j}(t)}{2} \times \omega \right)
\tag{14}
\]
where $x_{best\_i}(t+1)$ denotes the position of the ith individual in the next iteration (and likewise for $x_{best\_j}(t+1)$), best(x) represents the global optimum found after t iterations, $x_{best\_j}$ is selected randomly from the first subpopulation, and $\omega$ is the information acquisition rate, which takes the value 1 or 2.
3.2.2. The Second Subpopulation
Select num_middle individuals from the population as the second subpopulation:
\[
num\_middle = round(0.3 \times N)
\tag{15}
\]
These individuals fall between num_best and num_worst in the sorted population. They update their positions by the following model:
\[
x_{mid\_i}(t+1) = x_{mid\_i}(t) + Levy \times \left(best(x) - x_{mid\_j}\right)
\tag{16}
\]
where $x_{mid\_i}(t+1)$ denotes the position of the ith individual in the next iteration, Levy is the Levy distribution function [45,46], and $x_{mid\_j}$ is selected randomly from the second subpopulation.
3.2.3. The Third Subpopulation
Select the remaining num_worst individuals from the population as the final subpopulation:
\[
num\_worst = N - (num\_best + num\_middle)
\tag{17}
\]
In the final class, the individuals update their positions by the following equation:
\[
x_{worst\_i}(t+1) = x_{worst\_i} + \frac{t}{T} \times \left(best(x) - x_{worst\_j}\right)
\tag{18}
\]
where $x_{worst\_i}(t+1)$ denotes the position of the ith individual in the next iteration, and best(x) represents the global optimum found after t iterations.
In the early iterations of the IAOA, the first subpopulation contains more individuals, which speeds up the update of the global optimum. In the later iterations, the number of individuals in the first subpopulation decreases, which relieves the crowding of individuals near the optimum; meanwhile, the number of individuals in the third subpopulation increases, which effectively prevents the population from falling into a local optimum. The second subpopulation uses Levy flights for small-step updates to find more promising areas. This strategy effectively overcomes the weaknesses of the traditional AOA and improves its performance. The pseudo-code of the IAOA is given in Algorithm 2 (a minimal sketch of the population-control step follows the algorithm), and Figure 2 is the flowchart of the IAOA.
Algorithm 2 IAOA
1. Set up the initial parameters α,µ.
2. Initialize the population randomly.
3. for t= 1: T
4. Calculate the fitness function and select the best solution.
5. Calculate the number of the first subpopulation by Equation (12).
6. Update the first subpopulation by Equations (13) and (14).
7. Calculate the number of the second subpopulation by Equation (15).
8. Update the second subpopulation by Equation (16).
9. Calculate the number of the third subpopulation by Equation (17).
10. Update the third subpopulation by Equation (18).
11. Update the MOA (using Equation (8)) and MOP (using Equation (10)).
12. for i= 1: N
13. for j= 1: Dim
14. Generate the random values between [0, 1] (r1,r2,r3)
15. if r1>MOA
16. if r2> 0.5
17. Update the position of the individual by Equation (9).
18. else
19. Update the position of the individual by Equation (9).
20. end
21. else
22. if r3> 0.5
23. Update the position of the individual by Equation (11).
24. else
25. Update the position of the individual by Equation (11).
26. end
27. end
28. end
29. end
30. t=t+ 1
31. end
32. Return the best solution (x).
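To complement Algorithm 2, the following is a minimal Python sketch of the population-control step (Equations (12)–(18)). It is our own illustrative code, not the authors' implementation; in particular, the Levy step uses Mantegna's algorithm as a common stand-in (the paper only cites a Levy distribution [45,46] without fixing an implementation), and the partner individual x_j in each subpopulation is drawn uniformly at random:

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (our assumption)."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def population_control_step(X, fit, best, t, T, omega=1):
    """One application of Equations (12)-(18): sort the population, split it
    into three subpopulations, and update each with its own rule."""
    N, dim = X.shape
    order = np.argsort(fit)                               # best individuals first
    num_best = round(0.1 * N + 0.5 * N * (1 - t / T))     # Equation (12)
    num_middle = round(0.3 * N)                           # Equation (15)
    best_set = order[:num_best]
    mid_set = order[num_best:num_best + num_middle]
    worst_set = order[num_best + num_middle:]             # Equation (17)
    for i in best_set:                                    # Equations (13) and (14)
        j = np.random.choice(best_set)
        X[i] = X[i] + np.random.rand() * (best - (X[i] + X[j]) / 2.0 * omega)
    for i in mid_set:                                     # Equation (16)
        j = np.random.choice(mid_set)
        X[i] = X[i] + levy_step(dim) * (best - X[j])
    for i in worst_set:                                   # Equation (18)
        j = np.random.choice(worst_set)
        X[i] = X[i] + (t / T) * (best - X[j])
    return X
```

In the full IAOA this step precedes the standard AOA position update of Equations (9) and (11) within each iteration, as laid out in Algorithm 2.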
Figure 2. Flowchart of the IAOA.
4. Numerical Experiments and Analysis
4.1. Parameter Settings
Here, six groups of NESs and ten groups of integrations have been used to demonstrate the efficiency of the IAOA. For tackling the NESs, the IAOA is compared with several popular algorithms and two improved arithmetic optimization algorithms: the Arithmetic Optimization Algorithm (AOA) [37], Sine Cosine Algorithm (SCA) [47], Whale Optimization Algorithm (WOA) [48], Grey Wolf Optimizer (GWO) [49], Harris Hawks Optimization (HHO) [50], Slime Mould Algorithm (SMA) [51], Differential Evolution (DE) [52], Cuckoo Search Algorithm (CSA) [53], advanced Arithmetic Optimization Algorithm (nAOA) [40], and a developed version of the Arithmetic Optimization Algorithm (dAOA) [42]. The parameters of these algorithms are all taken from their original versions. The algorithms are evaluated from four aspects: the average value, the optimal value, the worst value, and the standard deviation. All algorithms are executed in MATLAB 2021a on a computer with a Windows 10 operating system, an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, and 16 GB of Random Access Memory (RAM), and each test problem is run 30 times independently.
The flowchart for handling issues by the IAOA is shown in Figure 3.
Figure 3. Flowchart for handling issues.
4.2. Application in Solving NESs
Solving nonlinear problems often requires high-precision solutions in many practical applications. In this section, six nonlinear systems of equations are chosen to evaluate the performance of the IAOA. The characteristics of these equations differ from each other: problem01 [54] describes an interval arithmetic problem, problem02 [55] describes a multiple steady-states problem, and problem06 [56] describes a molecular conformation problem. These problems come from real-world applications. For fairness, the population size is set to 50 and the maximum number of iterations to 200. Tables 1–6 show all the test results for the NESs, where Best represents the best value, Worst the worst value, Mean the mean value, and Std the standard deviation; the p-value in Table 7 stands for the Wilcoxon rank-sum test, which is used to verify whether there is a significant difference between two sets of results.
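Such a rank-sum comparison can be reproduced, for instance, with SciPy; the short sketch below uses made-up error samples rather than the paper's raw run data:

```python
import numpy as np
from scipy.stats import ranksums

# Hypothetical final-error samples from 30 independent runs of two algorithms
iaoa_errors = np.random.default_rng(0).uniform(1e-10, 1e-9, 30)
other_errors = np.random.default_rng(1).uniform(1e-3, 1e-1, 30)

stat, p_value = ranksums(iaoa_errors, other_errors)
print(p_value < 0.05)   # True indicates a statistically significant difference
```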
Table 1. Comparison of the experimental results for problem01.
Variable Algorithms
AOA IAOA SCA WOA
x10.006361583402960 0.257838650825518 0.186732591196869 0.260832096649832
x20.005731653837062 0.381098185347242 0.399818814038728 0.381680691118263
x30.010586282003880 0.278742562628776 0.008959145137085 0.258353295805450
x40.002593989505334 0.200665586275865 0.227237103605413 0.215307146397956
x50.033520558095432 0.445255928027431 0.003829239926320 0.448797960971748
x60.076424218265631 0.149188813621332 0.185905381801968 0.147397359179682
x70.038862694473151 0.432010769672038 0.368813050526818 0.442390776062597
x80.000004007877210 0.073406152818720 0.037739989370997 0.137586270569043
x90.029054432130685 0.345966262513093 0.206476235144125 0.342058064566263
x10 0.013690425703394 0.427324518269459 0.363350844915327 0.401475021739693
f
8.45665838921712
×
10
14.73405913551646 ×1010
1.22078391539763
×
10
1
9.59544885085295
×
10
4
Table 1. Cont.
Variable Algorithms
GWO HHO DE CSO
x10.256851024248810 0.324317023967532 2.000000000000000 0.089951372914250
x20.383565743620699 0.303967192642514 1.948157453190990 0.309487131659014
x30.278312335483674 0.216191961411362 2.000000000000000 0.456410156556233
x40.198737300040942 0.305260974230829 1.815308511546580 0.356392775439902
x50.446311619177502 0.325255783591842 2.000000000000000 0.476086684751138
x60.145894138632280 0.223020351676054 2.000000000000000 0.078921332097133
x70.145894138632280 0.323185143014029 2.000000000000000 0.499580490394335
x80.007832029555062 0.327973609353822 1.915762141824520 0.197756675883883
x90.343654620394334 0.333430854648433 2.000000000000000 0.228228833675487
x10 0.425902664080806 0.324142888370713 2.000000000000000 0.470195948900759
f
1.25544451911646
×
10
37.79220329211044 ×102
7.96261500819178
×
10
2
6.61705221934444
×
10
2
Variable Algorithms
SMA nAOA dAOA
x10.249900132290417 0.035430633051580 1.840704485033870
x20.375428314977531 0.053983062784772 1.213421005935260
x30.272448580296318 0.072735305166021 1.203555993641700
x40.199698265955405 0.021399042985613 0.393935624266822
x50.425934189445810 0.064655913970964 0.249476549706985
x60.057699959645613 0.012570281350831 0.459915310960444
x70.431865275874618 0.057639809639213 0.675754718182326
x80.015005640000641 0.005520004765830 0.895856414267328
x90.347986992756388 0.041229484511092 0.359139808282465
x10 0.415304164782275 0.079595719921909 1.529188120361250
f
4.47411205566240
×
10
36.74563715208325 ×1011.91503507134915
Table 2. Comparison of the experimental results for problem02.
Variable Algorithms
AOA IAOA SCA WOA
x10.040781958181860 0.042124781715274 0.000000000000000 0.041561373108785
x20.268625655728691 0.061754610138946 0.266593748985495 0.268697327813652
f
2.01752031872803
×
10
79.24446373305873 ×1034 8.82826387279195 ×105
6.92247231102962
×
10
9
Variable Algorithms
GWO HHO DE CSO
x10.265622854930434 0.267855297066815 0.266589101862370 0.266620164671422
x20.178718146817611 0.458749279058429 0.327275026016101 0.178514261126008
f
1.13985864694418
×
10
76.55986405733090 ×1081.31654979128584 ×1018
1.49504500886345
×
10
9
Variable Algorithms
SMA nAOA dAOA
x10.021419624272050 0.000000000000000 0.236558250181286
x20.048075232460874 0.719124811309122 0.508933311549167
f
2.89316821274146
×
10
53.07109081317222 ×105
3.22387407689191
×
10
4
Table 3. Comparison of the experimental results for problem03.
Variable Algorithms
AOA IAOA SCA WOA
x11.990744078311880 0.947268146986263 0.225974226141413 1.424482905343090
x20.220001522814532 0.785020015568289 1.245763361231140 0.543544840817441
f
5.61739095968327
×
10
34.02151576372412 ×1032 7.95691890654021 ×104
1.06331568826728
×
10
3
Variable Algorithms
GWO HHO DE CSO
x11.794053112053940 1.495480498807310 1.791308474954350 0.212779003619775
x20.303905803005920 0.420394691864127 0.301889327351144 1.257141525856050
f
2.77808608355359
×
10
56.12298193031725 ×1051.84881969881973 ×109
6.26348225916795
×
10
7
Variable Algorithms
SMA nAOA dAOA
x11.791387180972800 1.475077261850100 1.580085715978880
x20.302157020359872 0.454673564762598 0.4651484d76848022
f
5.47910691165820
×
10
82.17709293383390 ×104
5.12705019470938
×
10
2
Table 4. Comparison of the experimental results for problem04.
Variable Algorithms
AOA IAOA SCA WOA
x10.000266868453558 0.000000091835793 0.120898772911816 0.310246574315981
x20.000267036157051 0.000013971597535 0.491167568359585 0.467564824328878
x30.000267036274281 0.000030454051416 10.000000000000000 1.071469773086650
x40.000000025430197 0.000010000404353 0.178108600809833 0.404219784214681
x50.000267039311495 0.000011275918099 5.423242568753400 3.552125620609660
x60.000267036127224 0.000000019800029 0.049710980654501 1.834136698070800
x70.000000000091855 0.000000000138437 0.445662462511328 0.286050311387620
x80.000267036101457 0.000000454282127 10.000000000000000 2.931846497771810
x90.000267033832224 0.000000000736505 0.144419405019169 4.812450845354100
x10 0.000267043884482 0.000002006069864 0.518105971932846 3.756426716000660
f
1.08498006397337
×
10
97.03339003909689 ×1016
4.13237426374674
×
10
1
6.47066501369328
×
10
1
Variable Algorithms
GWO HHO DE CSO
x10.044653752694561 0.000047703379713 0.160723693838569 0.009650846541198
x20.259567674882923 0.000075691075249 0.431923139718368 0.147278561202585
x31.777013199398760 0.000029713372367 0.072922517980119 3.148557575646470
x40.042606334458592 0.000050184914825 0.447403957744849 0.512428980703464
x54.935286036663600 0.000033675529531 0.197972459731190 4.175819684412100
x68.146156623785810 0.000067989452634 1.490110445009050 7.123183974281880
x70.108125274969201 0.000031288762826 0.472265426079125 1.268663892956760
x81.747052457418910 0.000048491290536 0.509493705510866 3.198230908839320
x90.311997778279745 0.000063892452193 1.142101578993260 4.763105818868310
x10 8.430357427064680 0.000123055431652 2.110335475212350 9.463108408596410
f
7.56734706927375
×
10
36.11971561041781 ×1010
9.87501536049260
×
10
12.18295386757873
Table 4. Cont.
Variable Algorithms
SMA nAOA dAOA
x10.000000000028677 0.000020144848903 0.934997016811202
x20.000014644312649 0.000060200695401 1.295640443505010
x30.000038790339140 0.000020118018817 5.634966911723890
x40.000000000221797 0.000060200956330 4.825343892476190
x50.000000055701981 0.000020122803817 0.269511140973028
x60.000000030051237 0.000020134693956 7.253398121182340
x70.000000595936232 0.000020123341500 7.557747336452660
x80.000000000025333 0.000020925519435 5.520361069927860
x90.000000799504725 0.000043615727680 4.709534880735350
x10 0.000000000012983 0.000020120622373 8.954470788407880
f1.30095438660555 ×1010 1.50696700666871 ×1092.07190542503982 ×102
Table 5. Comparison of the experimental results for problem05.
Variable Algorithms
AOA IAOA SCA WOA
x10.371964486871792 0.500000000000000 0.471178994397267 0.503978268408352
x22.990337880814430 3.141592653589790 3.118271172186020 3.142976305563530
f1.89048835343036 ×1041.85873810048745 ×1028 3.41504906318340 ×1052.00099014478417 ×107
Variable Algorithms
GWO HHO DE CSO
x10.495722089382004 0.503332577729795 0.299448692445072 0.500482294032500
x23.143566564341090 3.142753305279310 2.836927770362990 3.142098043614560
f1.12835512797232 ×1061.16071617155615 ×1076.25300383824133 ×1023 2.13609775136897 ×108
Variable Algorithms
SMA nAOA dAOA
x10.298949061647857 0.354640044143990 2.956994389007600
x22.835691250750600 2.956994389007600 1.890717921128260
f1.05189651760469 ×1081.59376404093113 ×1043.65946616757579 ×103
Table 6. Comparison of the experimental results for problem06.
Variable Algorithms
AOA IAOA SCA WOA
x10.953663829653960 0.779548045079158 11.147659127176500 1.516510183032980
x20.663112382731748 0.779548045079158 0.900762400732728 0.694394649388567
x30.729782844271910 0.779548045079158 0.919816117314499 10.556407054559600
f3.35330112498813 ×1011.00553388370096 ×1020 2.75666643131973 8.65817545834561
Variable Algorithms
GWO HHO DE CSO
x10.781303537791760 0.782460718139219 0.779277448448367 0.765447632695953
x20.777872878718449 0.789339702437282 0.779700789186745 0.784775197498564
x30.779780469890485 0.766810453292313 0.780020611467694 0.735052686517780
f5.49159538279891 ×1041.00882211687459 ×1026.71295836563811 ×1062.92512803990831 ×101
Variable Algorithms
SMA nAOA dAOA
x10.779731780102931 0.437772635064718 1.056395480177350
x20.779371556451744 7.659741643877890 6.893981344148980
x30.779303513685515 2.620897335617900 1.876924860155790
f1.03517116885362 ×1051.49720612584788 2.61017698945353 ×104
Table 7. Statistical results for the NES.
Algorithms Systems of Nonlinear Equations
problem01 problem02 problem03 problem04 problem05 problem06
AOA best 7.02711 ×1011.20198 ×1088.30574 ×1012 2.99534 ×1010 5.32587 ×1061.60969 ×108
worst 9.05980 ×1017.47231 ×1079.55457 ×1033.58264 ×1095.96026 ×1041.00599 ×10
mean 8.45666 ×1012.01752 ×1073.18486 ×1041.08498 ×1091.89049 ×1043.35330 ×101
std 4.40686 ×1021.78065 ×1071.74442 ×1038.49280 ×1010 1.40374 ×1041.83668
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
IAOA best 1.05462 ×1010 0.00000 4.93038 ×1032 2.97972 ×1019 0.00000 1.81191 ×1030
worst 1.25230 ×1093.08149 ×1033 2.09541 ×1031 5.52546 ×1015 5.57614 ×1027 2.98754 ×1019
mean 4.73406 ×1010 9.24446 ×1034 7.27231 ×1032 7.03339 ×1016 1.85874 ×1028 1.00553 ×1020
std 2.84371 ×1010 1.43626 ×1033 4.02152 ×1032 1.22291 ×1015 1.01806 ×1027 5.45273 ×1020
SCA best 4.64629 ×1021.20156 ×1088.29788 ×1067.08592 ×1047.53679 ×1091.19890 ×101
worst 2.98744 ×1018.60445 ×1043.13588 ×1032.83503 2.00649 ×1043.29896 ×10
mean 1.22078 ×1018.82826 ×1055.47683 ×1044.13237 ×1013.41505 ×1052.75667
std 5.72692 ×1022.61875 ×1047.59630 ×1046.58494 ×1014.69615 ×1056.25475
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
WOA best 1.87873 ×1046.72146 ×1014 6.18945 ×1013 4.04945 ×1062.16928 ×1011 1.76476 ×105
worst 5.56233 ×1031.30541 ×1074.48907 ×1024.99725 4.78904 ×1067.91148 ×10
mean 9.59545 ×1046.92247 ×1094.26773 ×1036.47067 ×1012.00099 ×1078.65818
std 1.06419 ×1032.49080 ×1081.24385 ×1021.07197 8.71177 ×1072.24136 ×10
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
GWO best 2.65480 ×1062.31886 ×1012 1.77817 ×1081.01688 ×1062.21126 ×1099.05730 ×105
worst 6.59898 ×1031.73256 ×1069.94266 ×1025.57604 ×1021.70979 ×1051.58625 ×103
mean 1.25544 ×1031.13986 ×1073.33932 ×1037.56735 ×1031.12836 ×1065.49160 ×104
std 2.25868 ×1034.16137 ×1071.81481 ×1021.36923 ×1023.33417 ×1063.69947 ×104
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
HHO best 2.03768 ×1028.99794 ×1031 4.93038 ×1032 1.21192 ×1011 7.70372 ×1034 3.83242 ×105
worst 1.33302 ×1011.91904 ×1065.78702 ×1041.00491 ×1093.34700 ×1067.08247 ×102
mean 7.79220 ×1026.55986 ×1084.12782 ×1056.11972 ×1010 1.16072 ×1071.00882 ×102
std 2.90524 ×1023.50117 ×1071.19896 ×1042.78236 ×1010 6.10656 ×1071.45023 ×102
p-value 3.01986 ×1011 1.01490 ×1011 5.56066 ×1083.01986 ×1011 1.30542 ×1010 3.01230 ×1011
DE best 6.05782 ×1038.15969 ×1028 2.49399 ×1020 2.59514 ×1012.59615 ×1031 4.23182 ×1011
worst 9.69921 ×1011.19322 ×1017 5.91181 ×1072.58615 6.37964 ×1022 1.17012 ×104
mean 7.96262 ×1021.31655 ×1018 3.33313 ×1089.87502 ×1016.25300 ×1023 6.71296 ×106
std 2.40157 ×1012.91169 ×1018 1.26981 ×1076.21653 ×1011.66035 ×1022 2.15862 ×105
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 6.22236 ×1011 3.01230 ×1011
CSO best 2.82411 ×1027.30711 ×1011 2.92752 ×1096.03864 ×1012.67109 ×1010 2.27267 ×102
worst 1.34962 ×1017.15408 ×1092.57784 ×1064.34942 1.32416 ×1071.31894
mean 6.61705 ×1021.49505 ×1096.53698 ×1072.18295 2.13610 ×1082.92513 ×101
std 2.71383 ×1021.66707 ×1095.69101 ×1071.05318 3.36401 ×1083.41112 ×101
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
SMA best 5.18988 ×1041.26496 ×1072.37253 ×1011 2.08208 ×1011 6.22359 ×1011 3.95601 ×107
worst 1.17331 ×1022.46549 ×1045.80093 ×1072.89907 ×1010 5.94920 ×1084.75099 ×105
mean 4.47411 ×1032.89317 ×1055.98652 ×1081.30095 ×1010 1.05190 ×1081.03517 ×105
std 3.00476 ×1035.64857 ×1051.28713 ×1077.25135 ×1011 1.30068 ×1081.04158 ×105
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
nAOA best 4.73537 ×1011.16733 ×1093.11364 ×1012 3.28064 ×1010 2.13953 ×1057.56334 ×108
worst 7.39125 ×1019.06936 ×1048.22290 ×1012.69391 ×1094.30978 ×1044.49162 ×10
mean 6.74564 ×1013.07109 ×1052.77064 ×1021.50697 ×1091.59376 ×1041.49721
std 5.68300 ×1021.65502 ×1041.50077 ×1016.31248 ×1010 7.06193 ×1058.20053
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
dAOA best 2.01052 ×1018.99368 ×1092.54429 ×1043.09426 ×1010 5.69606 ×1068.50407 ×104
worst 6.87872 1.28121 ×1034.68145 ×1019.87499 ×1021.56431 ×1023.78263 ×105
mean 1.91504 3.22387 ×1046.56368 ×1022.07191 ×1023.65947 ×1032.61018 ×104
std 2.16147 3.20053 ×1041.21675 ×1012.92259 ×1025.26309 ×1038.07193 ×104
p-value 3.01986 ×1011 1.01490 ×1011 1.07516 ×1011 3.01986 ×1011 1.49399 ×1011 3.01230 ×1011
Problem 01. The description of the system is as follows [54]:
\[
\begin{cases}
x_1 - 0.25428722 - 0.18324757\,x_4 x_3 x_9 = 0\\
x_2 - 0.37842197 - 0.16275449\,x_1 x_{10} x_6 = 0\\
x_3 - 0.27162577 - 0.16955071\,x_1 x_2 x_{10} = 0\\
x_4 - 0.19807914 - 0.15585316\,x_7 x_1 x_6 = 0\\
x_5 - 0.44166728 - 0.19950920\,x_7 x_6 x_3 = 0\\
x_6 - 0.14654113 - 0.18922793\,x_8 x_5 x_{10} = 0\\
x_7 - 0.42937161 - 0.21180486\,x_2 x_5 x_8 = 0\\
x_8 - 0.07056438 - 0.17081208\,x_1 x_7 x_6 = 0\\
x_9 - 0.34504906 - 0.19612740\,x_{10} x_6 x_8 = 0\\
x_{10} - 0.42651102 - 0.21466544\,x_4 x_8 x_1 = 0
\end{cases}
\tag{19}
\]
There are ten equations in the system, where $x_i \in [-2, 2]$, i = 1, ..., n, and n = 10. The aim was to obtain a high-precision solution $x = (x_1, \ldots, x_n)$ through the proposed optimization method, and the results are recorded in Table 1. The IAOA performs better than the compared algorithms; the WOA ranks second, and the rest obtain competitive results. The convergence curve for this problem is shown in Figure 4a.
Figure 4. Convergence curve for tackling the NES (problem01–06 (a–f)).
Problem 02. The description of the system is as follows [55]:
\[
\begin{cases}
(1-R)\left[\dfrac{D}{10(1+\beta_1)} - x_1\right]\exp\!\left(\dfrac{10x_1}{1+10x_1/\gamma}\right) - x_1 = 0\\[2mm]
(1-R)\left[\dfrac{D}{10} - \beta_1 x_1 - (1+\beta_2)x_2\right]\exp\!\left(\dfrac{10x_2}{1+10x_2/\gamma}\right) + x_1 - (1+\beta_2)x_2 = 0
\end{cases}
\tag{20}
\]
There are two equations in the system, where $x_i \in [0, 1]$, i = 1, ..., n, and n = 2. In Table 2, the experimental results for this problem prove that the proposed IAOA outperforms the other methods. The DE ranks second, and the rest obtain competitive results. The AOA, WOA, GWO, HHO, and CSO are in the third echelon, and the remaining algorithms are in the fourth echelon. The convergence curve for this problem is shown in Figure 4b.
Problem 03. The description of the system is as follows [13]:
\[
\begin{cases}
\sin\!\left(x_1^3\right) - 3x_1 x_2^2 - 1 = 0\\
\cos\!\left(3x_1^2 x_2\right) - x_2^3 + 1 = 0
\end{cases}
\tag{21}
\]
There are two equations in the system, where $x_i \in [-2, 2]$, i = 1, ..., n, and n = 2. The simulation results for this problem are shown in Table 3. They reveal that the IAOA is better than the other algorithms. The DE, CSO, and SMA are in the second echelon, and the rest are in the third echelon. The convergence curve for this problem is shown in Figure 4c.
Problem 04. The description of the system is as follows [54]:
\[
\begin{cases}
x_2 + 2x_6 + x_9 + 2x_{10} - 10^{-5} = 0\\
x_3 + x_8 - 3\cdot 10^{-5} = 0\\
x_1 + x_3 + 2x_5 + 2x_8 + x_9 + x_{10} - 5\cdot 10^{-5} = 0\\
x_4 + 2x_7 - 10^{-5} = 0\\
0.5140437\cdot 10^{-7}\,x_5 - x_1^2 = 0\\
0.1006932\cdot 10^{-6}\,x_6 - 2x_2^2 = 0\\
0.7816278\cdot 10^{-15}\,x_7 - x_4^2 = 0\\
0.1496236\cdot 10^{-6}\,x_8 - x_1 x_3 = 0\\
0.6194411\cdot 10^{-7}\,x_9 - x_1 x_2 = 0\\
0.2089296\cdot 10^{-14}\,x_{10} - x_1 x_2^2 = 0
\end{cases}
\tag{22}
\]
There are ten equations in the system, where $x_i \in [-10, 10]$, i = 1, ..., n, and n = 10. Table 4 shows that the IAOA outperforms the others, while the AOA, HHO, SMA, and nAOA obtain competitive results. The convergence curve for this problem is shown in Figure 4d.
Problem 05. The description of the system is as follows [17]:
\[
\begin{cases}
0.5\sin(x_1 x_2) - \dfrac{0.25}{\pi}x_2 - 0.5x_1 = 0\\[1mm]
\left(1 - \dfrac{0.25}{\pi}\right)\left[\exp(2x_1) - e\right] + \dfrac{e}{\pi}x_2 - 2e x_1 = 0
\end{cases}
\tag{23}
\]
There are two equations in the system, where $x_1 \in [0.25, 1]$ and $x_2 \in [1.5, 2\pi]$. In Table 5, the IAOA obtained the optimal solution, DE obtained a suboptimal solution, and the remaining algorithms obtained competitive results. The convergence curve for this problem is shown in Figure 4e.
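To make the transformation of Equation (2) concrete for this case, the short sketch below (our own illustration, not the authors' code) evaluates the squared-residual objective of problem 05 at the solution reported for the IAOA in Table 5:

```python
import numpy as np

def problem05_objective(x):
    """Sum of squared residuals (Equation (2)) for the system in Equation (23)."""
    x1, x2 = x
    f1 = 0.5 * np.sin(x1 * x2) - 0.25 / np.pi * x2 - 0.5 * x1
    f2 = (1 - 0.25 / np.pi) * (np.exp(2 * x1) - np.e) + np.e / np.pi * x2 - 2 * np.e * x1
    return f1 ** 2 + f2 ** 2

print(problem05_objective([0.5, np.pi]))   # ~0: (0.5, pi) is a root of the system
```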
Problem 06. The description of the system is as follows [56]:
\[
\begin{cases}
\beta_{11} + \beta_{12}x_2^2 + \beta_{13}x_3^2 + \beta_{14}x_2 x_3 + \beta_{15}x_2^2 x_3^2 = 0\\
\beta_{21} + \beta_{22}x_3^2 + \beta_{23}x_1^2 + \beta_{24}x_3 x_1 + \beta_{25}x_3^2 x_1^2 = 0\\
\beta_{31} + \beta_{32}x_1^2 + \beta_{33}x_2^2 + \beta_{34}x_1 x_2 + \beta_{35}x_1^2 x_2^2 = 0
\end{cases}
\tag{24}
\]
There are three equations in the system, where the details about $\beta_{ij}$ can be found in the literature [56]; $x_i \in [-20, 20]$, i = 1, ..., n, and n = 3. In Table 6, the proposed IAOA outperforms the other algorithms, and the GWO, SMA, and DE obtain competitive results. The convergence curve for this problem is shown in Figure 4f.
The statistical results in Table 7 show that the IAOA outperforms all the compared algorithms on these problems, which demonstrates that the IAOA has a stronger search ability and higher stability than the other methods when solving nonlinear systems of equations. In Figure 4, the IAOA converges more slowly than the others before the 110th iteration for problem01, but after that it maintains a high convergence speed and reaches the optimum by the 200th iteration; for problem02 and problem03, the IAOA is the fastest throughout the whole process and reaches the optimum at about the 120th iteration and before the 120th iteration, respectively; for problem04, the IAOA is slower than the other algorithms before 70 iterations, but it continues to converge after that and obtains the optimal value by 200 iterations; for problem05, the IAOA and DE have similar convergence rates, but the IAOA obtains a better value; for problem06, the IAOA converges more slowly than the others before 20 iterations, but after that it has the fastest convergence rate. All the experimental results show that the proposed algorithm has a fast convergence speed, high convergence accuracy, high solution quality, good stability, and strong robustness when dealing with nonlinear systems of equations. The p-values of almost all test problems in Table 7 are less than 0.05, indicating that the IAOA is significantly different from the other algorithms.
4.3. Numerical Integration
The performance of the proposed method is evaluated in this section using the ten numerical integration problems in Table 8, where F08 is a singular integral and F10 is an oscillatory integral. The IAOA is compared with traditional methods and population-based algorithms in tackling these cases. Tables 9–12 show the best integral values obtained in 30 independent runs, where the R-method, T-method, S-method, H-method, G32, and 2n × L5 represent the traditional methods (rectangle method, trapezoid method, Simpson method, Hermite interpolation method, the 32-point Gaussian formula, and the 5-point Gauss-Roberto-Legendre formula). The rest are swarm intelligence algorithms applied to numerical integration (evolutionary strategy method [24], particle swarm optimization [25], differential evolution algorithm [27], and improved bat algorithm [28]). The population size and the maximum number of iterations are set to 30 and 200, respectively. In Table 9, for F01, the solution accuracy of the IAOA is higher than that of the other methods, and the S-method, FN, ES, DEBA, PSO, and DE obtain close results; for F02, the IAOA achieves the best result, and the FN, ES, DEBA, PSO, and DE are in the second echelon; for F03, the IAOA achieves a better result than the FN, ES, and PSO, while the MBFES, DEBA, and DE rank third. In Table 10, for F04, the IAOA obtains an excellent result, and the FN, ES, DEBA, PSO, and DE obtain similar values; for F05, the IAOA ranks first, and the FN, ES, DEBA, PSO, and DE rank second; for F06, the IAOA, FN, and DE achieve competitive results. In Table 11, for F07–F09, the IAOA obtains the best value, and the FN, ES, and DEBA rank second. The traditional methods (R-method, T-method, and S-method) fail to solve F10; therefore, G32 and 2n × L5 are used to tackle this problem. In Table 12, the IAOA and DEBA obtain similar values and rank first. Tables 13 and 14 give the statistical results for the numerical integrations (F01–F10) obtained by the swarm intelligence algorithms. For F01–F09, the IAOA is better than the other algorithms across all assessment criteria (the best value, the worst value, the mean value, and the standard deviation). For F10, the IAOA achieves the optimal result only in terms of the best value and ranks second for the remaining criteria, where the DEBA obtains the best results. From Figure 5, the method proposed in this paper has the fastest convergence speed and the highest convergence accuracy for all problems except F10. The above experimental results prove that the IAOA has a fast convergence speed, high solution accuracy, and strong robustness. These enable the IAOA to handle numerical integration problems; therefore, applying the IAOA to integration problems in practical engineering applications is a worthwhile direction.
Table 8. Details of the integrations F01–F10.
Integrations   Details                                                                      Range
F01            f(x) = x^2                                                                   [0, 2]
F02            f(x) = x^4                                                                   [0, 2]
F03            f(x) = √(1 + x^2)                                                            [0, 2]
F04            f(x) = 1/(1 + x)                                                             [0, 2]
F05            f(x) = sin x                                                                 [0, 2]
F06            f(x) = e^x                                                                   [0, 2]
F07            f(x) = √(1 + (cos x)^2)                                                      [0, 48]
F08            f(x) = e^(−x), 0 ≤ x < 1;  e^(−x/2), 1 ≤ x < 2;  e^(−x/3), 2 ≤ x ≤ 3         [0, 3]
F09            f(x) = e^(−x^2)                                                              [0, 1]
F10            f(x) = x cos x sin mx, (m = 10, 20, 30)                                      [0, 2π]
Table 9. Comparison of the experimental results for F01–F03.
Methods Integrations
F01 F02 F03
R-method 2.000 2.000 2.828
T-method 4.000 16.000 3.236
S-method 2.667 6.667 2.964
H-method 2.830 7.066 3.048
FN [26] 2.667 6.3995 2.95789
MBFES [24] 2.659 6.338 2.956
ES [24] 2.666 6.398 2.9577
DEBA [28] 2.66698573 6.401201 2.958169
PSO [25] 2.666 6.398 2.9578
DE [27] 2.667 6.3995 2.958
AOA 2.61006134 6.20147125 2.94004382
IAOA 2.66661710 6.40000000 2.95788286
Exact 2.66666667 6.40000000 2.95788572
Table 10. Comparison of the experimental results for F04–F06.
Methods Integrations
F04 F05 F06
R-method 1.000 1.683 5.437
T-method 1.333 0.909 8.389
S-method 1.111 1.425 6.421
H-method 1.112 1.452 6.691
FN [26] 1.0986 1.416 6.389
MBFES [24] 1.090 1.419 6.390
ES [24] 1.098 1.416 6.388
DEBA [28] 1.098754 1.416082 6.388921
PSO [25] 1.0985 1.416 6.3887
DE [27] 1.099 1.416 6.389
AOA 1.08923818 1.40101546 6.29531692
IAOA 1.09861229 1.41613957 6.38901606
Exact 1.09861229 1.41614684 6.38905610
Table 11. Comparison of the experimental results for F07–F09.
Methods Integrations
F07 F08 F09
R-method 52.13975183 1.51349542 0.77782078
T-method 62.43737140 1.61179305 0.74621972
S-method 117.61490334 2.48720505 0.74683657
H-method 58.99776108 1.56164258 0.75403569
FN [26] 58.4705 1.54604 0.746823
MBFES [24] 58.48828 1.5455 0.74652
ES [24] 58.47065 1.5459805 0.74683
DEBA [28] 58.470505372351 1.5460388345767 0.7468269544604
PSO 56.80139775 1.52897330 0.74328459
DE 56.04598085 1.52425900 0.74202909
AOA 56.17497970 1.52641514 0.74223182
IAOA 58.47046915 1.54603603 0.74682413
Exact 58.47046915 1.54603603 0.74682413
Table 12. Comparison of the experimental results for F10.
Methods Integrations
F10 (m = 10) F10 (m = 20) F10 (m = 30)
G32 0.6340207 1.2092524 1.5822272
2n ×L5 0.55875940 0.27789620 0.18508448
H-method 0.21043575 0.17309499 0.02945756
MBFES [24]0.68134052 0.37280425 0.17305621
ES [24]0.65034080 0.30583435 0.23556815
DEBA 0.63466518 0.31494663 0.20967248
PSO 1.50150183 1.33949737 1.10170197
DE [27]0.63982173 0.31035906 0.21438251
AOA 3.07253909 0.56489050 0.42642997
IAOA 0.63466518 0.31494663 0.20967248
Exact 0.63466518 0.31494663 0.20967248
Table 13. Statistical results for the numerical integrations (F01–F06).
Algorithms Integrations
F01 F02 F03 F04 F05 F06
AOA best 5.660532 ×1021.985287 ×1011.784189 ×1029.374106 ×1031.513137 ×1029.373918 ×102
worst 6.785842 ×1022.466178 ×1012.112411 ×1021.103594 ×1021.827849 ×1021.105054 ×101
mean 6.196485 ×1022.238141 ×1011.970905 ×1021.041648 ×1021.679104 ×1021.013200 ×101
std 2.473863 ×1031.277362 ×1026.790772 ×1044.381854 ×1047.886715 ×1043.985235 ×103
IAOA best 4.956295 ×1050.000000 2.855397 ×1060.000000 7.267277 ×1064.004088 ×105
worst 1.070986 ×1049.632589 ×1061.471988 ×1057.241931 ×1063.035345 ×1051.136393 ×104
mean 7.267766 ×1059.617999 ×1076.357033 ×1061.274560 ×1061.595556 ×1057.989662 ×105
std 1.561025 ×1052.672207 ×1062.828416 ×1061.942626 ×1065.989208 ×1062.032255 ×105
PSO [25] best 3.966996 ×1021.282142 ×1011.263049 ×1026.772669 ×1031.115352 ×1026.495427 ×102
worst 5.467546 ×1021.880821 ×1011.614274 ×1029.112184 ×1031.385859 ×1029.718717 ×102
mean 4.406724 ×1021.593799 ×1011.405265 ×1027.745239 ×1031.208230 ×1027.327404 ×102
std 3.262431 ×1031.528260 ×1029.707823 ×1046.532329 ×1047.146743 ×1046.698801 ×103
DE [27] best 5.444535 ×1021.776272 ×1011.740389 ×1029.410606 ×1031.537737 ×1029.229490 ×102
worst 6.223208 ×1021.992612 ×1011.943564 ×1021.043440 ×1021.668422 ×1021.003285 ×101
mean 5.887766 ×1021.887098 ×1011.881844 ×1021.003350 ×1021.606658 ×1029.665791 ×102
std 1.717478 ×1035.056921 ×1034.230737 ×1042.412656 ×1043.636407 ×1041.886442 ×103
DEBA [28] best 5.858312 ×1021.958779 ×1011.797733 ×1029.632554 ×1031.541447 ×1029.078063 ×102
worst 6.805128 ×1022.566962 ×1012.194973 ×1021.144459 ×1021.824156 ×1021.096576 ×101
mean 6.306158 ×1022.287206 ×1012.005007 ×1021.048558 ×1021.700868 ×1021.008133 ×101
std 2.059708 ×1031.384008 ×1028.428458 ×1044.319549 ×1047.193521 ×1044.457879 ×103
ES [24] best 3.634854 ×1021.053634 ×1011.178783 ×1026.152581 ×1039.742411 ×1036.028495 ×102
worst 3.704455 ×1021.076016 ×1011.197536 ×1026.272540 ×1039.921388 ×1036.120127 ×102
mean 3.662145 ×1021.064150 ×1011.189432 ×1026.206519 ×1039.813727 ×1036.070549 ×102
std 1.618502 ×1044.726931 ×1044.687831 ×1052.718416 ×1054.560503 ×1052.303572 ×104
Table 14. Statistical results for numerical integrations (F07–F10).
Algorithms Integrations
F07 F08 F09 F10 (m = 10) F10 (m = 20) F10 (m = 30)
AOA best 2.295489 1.962088 ×1024.592313 ×1032.437873 2.499438 ×1012.167574 ×101
worst 2.524012 2.400262 ×1025.421672 ×1033.611012 3.429053 3.115022
mean 2.424997 2.226327 ×1025.031127 ×1033.225836 1.617425 9.721188 ×101
std 5.634089 ×1021.017542 ×1032.167135 ×1042.620454 ×1019.081448 ×1017.417795 ×101
IAOA best 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
worst 4.285648 ×1049.665730 ×1067.650313 ×1094.941453 ×1048.932970 ×1044.121824 ×104
mean 5.817808 ×1051.079836 ×1061.094646 ×1096.843408 ×1059.159354 ×1056.487479 ×105
std 9.331558 ×1052.377176 ×1062.051844 ×1091.219906 ×1041.972260 ×1049.370544 ×105
PSO [25] best 1.093717 1.499542 ×1023.212480 ×1035.688245 ×1011.024550 8.920294 ×101
worst 2.077297 2.010782 ×1024.674802 ×1031.599995 1.485451 1.953066
mean 1.669071 1.706272 ×1023.539538 ×1038.668366 ×1011.219538 1.489201
std 2.419795 ×1011.205259 ×1033.409595 ×1042.759571 ×1011.216184 ×1012.065585 ×101
DE [27] best 2.255785 2.091958 ×1024.575317 ×1032.543013 3.461794 3.889322
worst 2.522405 2.254710 ×1025.009106 ×1033.236645 4.684467 5.201887
mean 2.424488 2.177702 ×1024.795040 ×1033.015091 4.242609 4.687029
std 5.766110 ×1024.602533 ×1041.146454 ×1041.967397 ×1012.313007 ×1012.923496 ×101
DEBA [28] best 2.361570 ×1012.057410 ×1024.776881 ×1036.043389 ×1014 1.208677 ×1013 5.319404 ×1013
worst 2.468831 2.474051 ×1025.441200 ×1036.043389 ×1014 1.208677 ×1013 5.319404 ×1013
mean 1.163514 2.294436 ×1025.157892 ×1036.043389 ×1014 1.208677 ×1013 5.319404 ×1013
std 6.919695 ×1019.765442 ×1041.475304 ×1043.851264 ×1029 7.702528 ×1029 3.081011 ×1028
ES [24] best 1.298269 1.319474 ×1023.051746 ×1031.460773 1.634373 1.152204
worst 1.321623 1.341748 ×1023.121709 ×1031.665912 2.355153 2.380726
mean 1.308546 1.331615 ×1023.081151 ×1031.568781 1.869004 1.719830
std 5.523404 ×1035.640941 ×1051.521690 ×1054.627499 ×1021.831224 ×1012.898513 ×101
Figure 5. Convergence curve for the numerical integrations (F01–F10 (a–l)).
4.4. Solving an Engineering Problem
Compared with three-dimensional motion, planar motion restricts the robot to a single plane and is simpler to analyze; moreover, most robot mechanisms can be simplified to planar mechanisms for analysis. The robotic arm now plays an increasingly important role and has attracted extensive attention from researchers. Improving the working efficiency of the robotic arm under the premise of low energy consumption is a challenging problem facing the industrial field [57]. The kinematics of a robotic arm mainly include forward kinematics and inverse kinematics: in the former, the pose of the end effector is determined from the rotation angle of each joint with respect to the base coordinates; in the latter, the end joint is taken as the starting point and the chain is traced back to the base coordinates. The inverse kinematics problem is essentially a nonlinear equation problem. In practical applications, the tasks performed by the robotic arm are usually described in its base coordinate system; therefore, the inverse kinematics solution is particularly important in the field of control. The robotic arm model [58] is shown in Figure 6a, and the mathematical model in coordinates is shown in Figure 6b. The nonlinear equation system for this model is as follows:
\[
\begin{cases}
10{,}000 \times \left( a\sin(A_2) - b\sin(A_2+B_2) + c\sin(A_2+B_2+C_2) - X \right)^2 = 0\\
10{,}000 \times \left( h - a\cos(A_2) - b\cos(A_2+B_2) + c\cos(A_2+B_2+C_2) - Y \right)^2 = 0\\
|A_2-A_1| + |B_2-B_1| + |C_2-C_1| = 0
\end{cases}
\tag{25}
\]
where a = 16.5 cm, b = 7.9 cm, c = 5.3 cm, and h = 7.4 cm; (A1 = 150°, B1 = 132.7026°, C1 = 127.0177°) are the initial angles of the three joints; (X = 10 cm, Y = 10 cm) is the coordinate of the end effector; and (A2, B2, C2) are the three joint angles to be determined in the final stage. The first two equations of the nonlinear equation system require the three joint angles to bring the end effector to the target position (X, Y), and the third equation requires the change of the joint angles to be as small as possible, so as to save energy.
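A minimal Python sketch of this objective follows; it is based on our reconstruction of Equation (25) above, so the sign conventions, the degree-to-radian handling, and the way the three terms are combined into one scalar are assumptions rather than the authors' code:

```python
import numpy as np

# Geometry and initial configuration taken from the text (angles in degrees)
a, b, c, h = 16.5, 7.9, 5.3, 7.4
A1, B1, C1 = 150.0, 132.7026, 127.0177
X_target, Y_target = 10.0, 10.0

def arm_objective(angles):
    """Position error of the end effector (first two lines of Equation (25),
    as reconstructed) plus the total joint-angle change (third line)."""
    A2, B2, C2 = angles
    t1, t2, t3 = np.radians([A2, A2 + B2, A2 + B2 + C2])
    ex = a * np.sin(t1) - b * np.sin(t2) + c * np.sin(t3) - X_target
    ey = h - a * np.cos(t1) - b * np.cos(t2) + c * np.cos(t3) - Y_target
    change = abs(A2 - A1) + abs(B2 - B1) + abs(C2 - C1)
    return 10_000.0 * ex ** 2 + 10_000.0 * ey ** 2 + change

# Evaluate at the joint angles reported for the IAOA in Table 15
print(arm_objective([145.7291, 139.0180, 123.9864]))
```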
Figure 6. (a) The model of a robotic arm, and (b) a mathematical model for a robotic arm.
Tables 15–18 demonstrate that the IAOA obtains the results closest to the initial angles compared with the PSO, GA, and PSSA when solving the inverse kinematics problem of the robotic arm. This shows that the method proposed in this paper allows the robotic arm to consume less energy during movement. In Table 19, f represents the fitness value obtained by Equation (25), and |A2 − A1| + |B2 − B1| + |C2 − C1| is the total difference between the final and initial joint angles. Obviously, the IAOA achieves the best results for both evaluations. Therefore, the method is of great significance to the stability, operation efficiency, operation accuracy, and energy consumption of robotic arm trajectory control. A new method is thus provided for the inverse kinematics solution, which makes up for the deficiencies of the traditional methods.
Table 15. The results obtained by the IAOA for the engineering problem.

Algorithm                   A2           B2           C2
IAOA (initial angle)        150          132.7026     127.0177
IAOA (result)               145.7291     139.0180     123.9864

Table 16. The results obtained by the PSO for the engineering problem.

Algorithm                   A2           B2           C2
PSO (initial angle)         150          132.7026     127.0177
PSO (result)                139.6534     68.2235      96.4886

Table 17. The results obtained by the GA for the engineering problem.

Algorithm                   A2           B2           C2
GA (initial angle)          150          132.7026     127.0177
GA (result)                 129.8653     118.9625     52.6691

Table 18. The results obtained by the PSSA for the engineering problem.

Algorithm                   A2           B2           C2
PSSA [58] (initial angle)   150          132.7026     127.0177
PSSA [58] (result)          147.1015     92.5371      89.5116
Table 19. Comparison of the experimental results for the IAOA, PSO, GA, and PSSA.

Objective Function                        IAOA             PSO              GA               PSSA
f                                         1.3618 × 10^1    3.0608 × 10^6    3.2329 × 10^6    2.0199 × 10^5
|A2 − A1| + |B2 − B1| + |C2 − C1|         13.6176          105.3548         118.2234         80.5701
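As a quick consistency check, the angle-change entry for the IAOA in Table 19 can be reproduced directly from Table 15: |145.7291 − 150| + |139.0180 − 132.7026| + |123.9864 − 127.0177| = 4.2709 + 6.3154 + 3.0313 = 13.6176.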
5. Conclusions and Future Works
In this paper, the shortcomings of the traditional AOA are analyzed, and an improved AOA based on a population control strategy is proposed to overcome them. The algorithm finds the global optimum faster by classifying the population and adaptively controlling the number of individuals in each subpopulation. This method effectively strengthens information sharing between individuals, searches the space more thoroughly, avoids falling into local optima, accelerates convergence, and improves optimization accuracy. The AOA, the IAOA, and several other algorithms are compared on 6 systems of nonlinear equations, 10 numerical integrations, and an engineering problem. The experimental results show that the IAOA solves these problems well and outperforms the other algorithms. In future work, the IAOA can be applied to more nonlinear problems in practical engineering applications, extended to the evaluation of multiple integrals, and further improved in its performance.
Author Contributions: Conceptualization and methodology, M.C. and Y.Z.; software, M.C.; writing—original draft preparation, M.C.; writing—review and editing, Y.Z. and Q.L.; and funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Natural Science Foundation of China, Grant Nos. U21A20464 and 62066005.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Broyden, C.G. A class of methods for solving nonlinear simultaneous equations. Math. Comput. 1965, 19, 577–593. [CrossRef]
2. Ramos, H.; Monteiro, M.T.T. A new approach based on the Newton's method to solve systems of nonlinear equations. J. Comput. Appl. Math. 2017, 318, 3–13. [CrossRef]
3. Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Modified Newton's method for systems of nonlinear equations with singular Jacobian. J. Comput. Appl. Math. 2009, 224, 77–83. [CrossRef]
4. Luo, Y.Z.; Tang, G.J.; Zhou, L.N. Hybrid approach for solving systems of nonlinear equations using chaos optimization and quasi-Newton method. Appl. Soft Comput. 2008, 8, 1068–1073. [CrossRef]
5. Karr, C.L.; Weck, B.; Freeman, L.M. Solutions to systems of nonlinear equations via a genetic algorithm. Eng. Appl. Artif. Intell. 1998, 11, 369–375. [CrossRef]
6. Ouyang, A.J.; Zhou, Y.Q.; Luo, Q.F. Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations. In Proceedings of the 2009 IEEE International Conference on Granular Computing, Nanchang, China, 17–19 August 2009; pp. 460–465.
7. Jaberipour, M.; Khorram, E.; Karimi, B. Particle swarm algorithm for solving systems of nonlinear equations. Comput. Math. Appl. 2011, 62, 566–576. [CrossRef]
8. Pourjafari, E.; Mojallali, H. Solving nonlinear equations systems with a new approach based on invasive weed optimization algorithm and clustering. Swarm Evol. Comput. 2012, 4, 33–43. [CrossRef]
9. Jia, R.M.; He, D.X. Hybrid artificial bee colony algorithm for solving nonlinear system of equations. In Proceedings of the 2012 Eighth International Conference on Computational Intelligence and Security, Guangzhou, China, 17–18 November 2012; pp. 56–60.
10. Ren, H.M.; Wu, L.; Bi, W.H.; Argyros, I.K. Solving nonlinear equations system via an efficient genetic algorithm with symmetric and harmonious individuals. Appl. Math. Comput. 2013, 219, 10967–10973. [CrossRef]
11. Cai, R.Z.; Yue, G.L. A novel firefly algorithm of solving nonlinear equation group. Appl. Mech. Mater. 2013, 389, 918–923.
12. Abdollahi, M.; Isazadeh, A.; Abdollahi, D. Imperialist competitive algorithm for solving systems of nonlinear equations. Comput. Math. Appl. 2013, 65, 1894–1908. [CrossRef]
13. Hirsch, M.J.; Pardalos, P.M.; Resende, M.G.C. Solving systems of nonlinear equations with continuous GRASP. Nonlinear Anal. Real World Appl. 2009, 10, 2000–2006. [CrossRef]
14. Sacco, W.F.; Henderson, N. Finding all solutions of nonlinear systems using a hybrid metaheuristic with fuzzy clustering means. Appl. Soft Comput. 2011, 11, 5424–5432. [CrossRef]
15. Gong, W.Y.; Wang, Y.; Cai, Z.H.; Yang, S. A weighted bi-objective transformation technique for locating multiple optimal solutions of nonlinear equation systems. IEEE Trans. Evol. Comput. 2017, 21, 697–713. [CrossRef]
16. Ariyaratne, M.K.A.; Fernando, T.G.I.; Weerakoon, S. Solving systems of nonlinear equations using a modified firefly algorithm (MODFA). Swarm Evol. Comput. 2019, 48, 72–92. [CrossRef]
17. Gong, W.Y.; Wang, Y.; Cai, Z.H.; Wang, L. Finding multiple roots of nonlinear equation systems via a repulsion-based adaptive differential evolution. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 1499–1513. [CrossRef]
18. Ibrahim, A.M.; Tawhid, M.A. A hybridization of differential evolution and monarch butterfly optimization for solving systems of nonlinear equations. J. Comput. Des. Eng. 2019, 6, 354–367. [CrossRef]
19. Liao, Z.W.; Gong, W.Y.; Wang, L. Memetic niching-based evolutionary algorithms for solving nonlinear equation system. Expert Syst. Appl. 2020, 149, 113261. [CrossRef]
20. Ning, G.Y.; Zhou, Y.Q. Application of improved differential evolution algorithm in solving equations. Int. J. Comput. Intell. Syst. 2021, 14, 199. [CrossRef]
21. Rizk-Allah, R.M. A quantum-based sine cosine algorithm for solving general systems of nonlinear equations. Artif. Intell. Rev. 2021, 54, 3939–3990. [CrossRef]
22. Ji, J.Y.; Man, L.W. An improved dynamic multi-objective optimization approach for nonlinear equation systems. Inf. Sci. 2021, 576, 204–227. [CrossRef]
23. Turgut, O.E.; Turgut, M.S.; Coban, M.T. Chaotic quantum behaved particle swarm optimization algorithm for solving nonlinear system of equations. Comput. Math. Appl. 2014, 68, 508–530. [CrossRef]
24. Zhou, Y.Q.; Zhang, M.; Zhao, B. Numerical integration of arbitrary functions based on evolutionary strategy method. Chin. J. Comput. 2008, 21, 196–206.
25. Wei, X.Q.; Zhou, Y.Q. Research on numerical integration method based on particle swarm optimization. Microelectron. Comput. 2009, 26, 117–119.
26. Wei, X.X.; Zhou, Y.Q.; Lan, X.L. Research on a numerical integration method based on functional networks. Comput. Sci. 2009, 36, 224–226.
27. Deng, Z.X.; Huang, F.D.; Liu, X.J. A differential evolution algorithm for solving numerical integration problems. Comput. Eng. 2011, 37, 206–207.
28. Xiao, H.H.; Duan, Y.M. Application of improved bat algorithm in numerical integration. J. Intell. Syst. 2014, 9, 364–371.
29. Szczepanski, R.; Kaminski, M.; Tarczewski, T. Auto-tuning process of state feedback speed controller applied for two-mass system. Energies 2020, 13, 3067. [CrossRef]
30. Hu, H.B.; Hu, Q.B.; Lu, Z.Y.; Xu, D. Optimal PID controller design in PMSM servo system via particle swarm optimization. In Proceedings of the 31st Annual Conference of IEEE Industrial Electronics Society, IECON 2005, Raleigh, NC, USA, 6–10 November 2005; p. 5.
31. Szczepanski, R.; Tarczewski, T.; Niewiara, L.J.; Stojic, D. Identification of mechanical parameters in servo-drive system. In Proceedings of the 2021 IEEE 19th International Power Electronics and Motion Control Conference (PEMC), Gliwice, Poland, 25–29 April 2021; pp. 566–573.
32. Liu, L.; Cartes, D.A.; Liu, W. Particle swarm optimization based parameter identification applied to PMSM. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 9–13 July 2007; pp. 2955–2960.
33. Szczepanski, R.; Tarczewski, T. Global path planning for mobile robot based on artificial bee colony and Dijkstra's algorithms. In Proceedings of the 2021 IEEE 19th International Power Electronics and Motion Control Conference (PEMC), Gliwice, Poland, 25–29 April 2021; pp. 724–730.
34. Brand, M.; Masuda, M.; Wehner, N.; Yu, X.H. Ant colony optimization algorithm for robot path planning. In Proceedings of the 2010 International Conference on Computer Design and Applications, Qinhuangdao, China, 25–27 June 2010; pp. 436–440.
35. Szczepanski, R.; Erwinski, K.; Tejer, M.; Bereit, A.; Tarczewski, T. Optimal scheduling for palletizing task using robotic arm and artificial bee colony algorithm. Eng. Appl. Artif. Intell. 2022, 113, 104976. [CrossRef]
36. Kolakowska, E.; Smith, S.F.; Kristiansen, M. Constraint optimization model of a scheduling problem for a robotic arm in automatic systems. Robot. Auton. Syst. 2014, 62, 267–280. [CrossRef]
37. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [CrossRef]
38. Premkumar, M.; Jangir, P.; Kumar, D.S.; Sowmya, R.; Alhelou, H.H.; Abualigah, L.; Yildiz, A.R.; Mirjalili, S. A new arithmetic optimization algorithm for solving real-world multi-objective CEC-2021 constrained optimization problems: Diversity analysis and validations. IEEE Access 2021, 9, 84263–84295. [CrossRef]
39. Bansal, P.; Gehlot, K.; Singhal, A.; Gupta, A. Automatic detection of osteosarcoma based on integrated features and feature selection using binary arithmetic optimization algorithm. Multimed. Tools Appl. 2022, 81, 8807–8834. [CrossRef]
40. Agushaka, J.O.; Ezugwu, A.E. Advanced arithmetic optimization algorithm for solving mechanical engineering design problems. PLoS ONE 2021, 16, e0255703.
41. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A. A novel evolutionary arithmetic optimization algorithm for multilevel thresholding segmentation of COVID-19 CT images. Processes 2021, 9, 1155. [CrossRef]
42. Xu, Y.P.; Tan, J.W.; Zhu, D.J.; Ouyang, P.; Taheri, B. Model identification of the proton exchange membrane fuel cells by extreme learning machine and a developed version of arithmetic optimization algorithm. Energy Rep. 2021, 7, 2332–2342. [CrossRef]
43. Izci, D.; Ekinci, S.; Kayri, M.; Eker, E. A novel improved arithmetic optimization algorithm for optimal design of PID controlled and Bode's ideal transfer function-based automobile cruise control system. Evol. Syst. 2021, 13, 453–468. [CrossRef]
44. Khatir, S.; Tiachacht, S.; Thanh, C.L.; Ghandourah, E.; Mirjalili, S.; Wahab, M.A. An improved artificial neural network using arithmetic optimization algorithm for damage assessment in FGM composite plates. Compos. Struct. 2021, 273, 114287. [CrossRef]
45. Viswanathan, G.M.; Afanasyev, V.; Buldyrev, S.; Murphy, E.J.; Prince, P.A.; Stanley, H.E. Lévy flight search patterns of wandering albatrosses. Nature 1996, 381, 413–415. [CrossRef]
46. Humphries, N.E.; Queiroz, N.; Dyer, J.R.; Pade, N.G.; Musyl, M.K.; Schaefer, K.M.; Fuller, D.W.; Brunnschweiler, J.M.; Doyle, T.K.; Houghton, J.D.; et al. Environmental context explains Lévy and Brownian movement patterns of marine predators. Nature 2010, 465, 1066–1069. [CrossRef]
47. Mirjalili, S. A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [CrossRef]
48. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [CrossRef]
49. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [CrossRef]
50. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [CrossRef]
51. Li, S.M.; Chen, H.L.; Wang, M.J.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [CrossRef]
52. Price, K.V. Differential evolution: A fast and simple numerical optimizer. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996; pp. 524–527.
53. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [CrossRef]
54. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 698–714. [CrossRef]
55. Floudas, C.A. Recent advances in global optimization for process synthesis, design and control: Enclosure of all solutions. Comput. Chem. Eng. 1999, 23, S963–S973. [CrossRef]
56. Nikkhah-Bahrami, M.; Oftadeh, R. An effective iterative method for computing real and complex roots of systems of nonlinear equations. Appl. Math. Comput. 2009, 215, 1813–1820. [CrossRef]
57. Ding, X. Robot Control Research; Zhejiang University Press: Hangzhou, China, 2006; pp. 37–38.
58. Xiang, Z.H.; Zhou, Y.Q.; Luo, Q.F.; Wen, C. PSSA: Polar coordinate salp swarm algorithm for curve design problems. Neural Process Lett. 2020, 52, 615–645. [CrossRef]