Results in Control and Optimization 7 (2022) 100127
Contents lists available at ScienceDirect
Results in Control and Optimization
journal homepage: www.elsevier.com/locate/rico
Transit search: An optimization algorithm based on exoplanet
exploration
Masoomeh Mirrashid 1,*, Hosein Naderpour 2
Faculty of Civil Engineering, Semnan University, Iran
ARTICLE INFO
Keywords:
Transit search
Optimization
Meta-heuristic
Astrophysics
Exoplanet exploration
ABSTRACT
In this article, a novel astrophysics-inspired meta-heuristic optimization algorithm, namely
Transit Search (TS), is proposed based on a well-known exoplanet exploration method. More than
3800 planets have been detected with the transit technique, according to the databases of the
space telescopes. Transit has thus shown more potential than the second most successful
method, radial velocity, with 915 planets discovered as of March 2022. Planets are difficult to
detect because of their small dimensions on the cosmic scale. Due to the high efficiency and
capabilities of the transit method in astrophysics, it has been used here to formulate an
optimization technique. In the transit algorithm, the light received from the stars is studied
at certain intervals and the changes in luminosity are examined; if a decrease in the received
light is observed, it indicates that a planet is passing in front of the star. To evaluate the
capability of the proposed algorithm, 73 constrained and unconstrained problems are considered
and the results are compared with 13 well-known optimization algorithms. This set of examples
covers a wide range of problem types, including mathematical functions (28 high-dimensional and
15 low-dimensional problems), CEC functions (10 problems), constrained mathematical benchmark
problems (G01–G13), and 7 constrained engineering problems. The results indicate that the
overall average error of the proposed algorithm is the lowest among the compared algorithms on
the benchmark problems.
1. Introduction
Optimization means finding the best global response for an objective function (the maximum or minimum value of the function)
in the search space. There are two main approaches: classical methods and meta-heuristic methods. The first group, although
able to guarantee the optimal response, loses its efficiency in complex and large-scale problems; in some cases, using
classical methods to determine the optimal response can take hundreds of years. Therefore, the second approach, which
includes the set of meta-heuristic methods, has been favored by researchers. Although these techniques do not guarantee
finding the best response, they can approximate optimal and acceptable answers in a reasonable time. Optimization is one of
the most important issues, with many applications in various sciences. For example, their application in the industrial
internet of things [1], parameter extraction of electrolyte fuel cell stacks [2], traffic light control [3], wireless
networks [4], feature selection [5], cloud computing environments [6], radial distribution networks in the presence of
plug-in electric vehicles [7], vulnerability assessment of RC frames subject to seismic sequences [8], shape optimization of
rotating electric machines [9], diagnostic accuracy of transformer faults [10],
* Corresponding author.
E-mail addresses: m.mirrashid@semnan.ac.ir (M. Mirrashid), naderpour@semnan.ac.ir (H. Naderpour).
1Postdoctoral Research Fellow
2Professor
https://doi.org/10.1016/j.rico.2022.100127
Received 13 January 2022; Received in revised form 29 March 2022; Accepted 9 April 2022
Available online 18 April 2022
2666-7207/©2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license
(http://creativecommons.org/licenses/by-nc-nd/4.0/).
thermal design of urbanized areas [11], image segmentation [12], and network reconfiguration in power systems [13] has been
evaluated by researchers. Optimization is also one of the main topics in machine learning, widely used in the
literature (e.g. [14–20]), since determining the values of the unknown parameters of a model generally requires an
optimization technique. Many algorithms proposed for optimization purposes are inspired by nature. These algorithms are
either based on Swarm Intelligence (SI) models (e.g. the Cuckoo search algorithm [21]) or are not SI-based (e.g. the
differential evolution algorithm [22]). Some nature-inspired algorithms are based on sciences such as physics and chemistry.
Meanwhile, some algorithms have been defined by implementing biological evolution processes inspired by nature [23], the
most famous of which is the Genetic algorithm.
Two types of algorithms can be found in the literature: algorithms that present an improvement of an
existing algorithm (e.g. [24–31]) and new algorithms (e.g. [32,33]). In some cases, a specific goal such
as the shortest path problem [34–36] or incremental software development [37–39] has been considered. The existence of a
large number of optimization algorithms is one of the topics discussed in previous studies [40–44]. Despite this number,
their capabilities face limitations and challenges. In fact, due to the factors and complexities involved in solving an
optimization problem, no single optimization algorithm can be expected to have the highest efficiency on all existing
problems. This is not unexpected according to the ‘‘No Free Lunch’’ theorem [45]. Therefore, in optimization problems,
several factors, such as the problem dimension, acceptable time, number of constraints, and constraint types (continuous or
non-continuous), can be important in choosing the optimization technique.
Despite the well-known examples for evaluating the ability of an optimization technique, only a limited number of algorithms
are capable of solving a wide range of problems. A suitable optimization algorithm should be usable for different types of
problems, have capable strategies for exploration and exploitation, and present acceptable results compared with
well-known algorithms. These are three important elements that should be considered. However, in most existing optimization
approaches, some of these elements are neglected for simplicity, which obviously limits the applications of an algorithm.
Given the widespread applications of optimization in various sciences, presenting optimization algorithms is one of
the most up-to-date topics pursued by researchers, industries, engineers, and countries for different optimization goals. In
most existing methods, only a limited number of examples were reviewed and discussed, and this can be seen across a variety
of algorithms. Therefore, the need for optimization algorithms that can solve a wider range of problems is a serious
challenge and a motivation for researchers to introduce new optimization algorithms with higher efficiency than the existing
approaches. In this article, an optimization algorithm inspired by the most powerful method of detecting planets
(i.e., transit) is presented. For this purpose, the next section provides the background of the proposed algorithm. Then,
the theory and structure of the algorithm are expressed in detail; a sensitivity analysis is performed on the parameters of
the algorithm, and the optimization process is interpreted through several examples. In this article, a set of 73 benchmark
problems is evaluated by the Transit algorithm; moreover, a comparison with 13 well-known algorithms is performed, and
non-parametric studies of time, error, and computational complexity of the algorithm are presented. The rest of this article
is organized as follows: in the next section, the background of the transit search is provided. Section 3 contains the
framework of the proposed algorithm, including theory, flowchart, pseudocodes, sensitivity analysis, and equations, in
detail; this section ends by solving six mathematical examples to show the optimizing process of the proposed algorithm.
Numerical examples, including the results of the 73 benchmark problems, can be found in Section 4. Section 5 presents a
discussion of the errors and algorithm complexity in comparison with a number of well-known algorithms. The key findings
are provided in the last section of this article.
2. Background
A galaxy is a collection of dust, gas, and billions of stars, all held together by gravity and revolving around its center
of mass. It also includes planets orbiting host stars, as can be seen in the solar system. Not all regions of a galaxy are
expected to be habitable; for example, there is a black hole at the center of active galaxies, and life as we know it
cannot exist around a black hole. The solar system, the place where we live, is located in the life belt of the Milky Way
galaxy.
Exoplanets are planets outside the solar system that orbit a star other than the Sun. Identifying exoplanets is one of the
challenges for physicists, and it is important for several reasons: by examining different planets, it becomes clear how
they formed, how old they are, and what they are made of. One of the most important goals of exoplanet exploration is to
identify viable planets. A planet must have special conditions to be a suitable host for life. In general, the rocky nature
of the material, its mass, and the ability to retain water on the planet's surface are among the most important parameters
investigated in the study of a planet. Beyond this, the most basic condition is that the planet should be in the habitable
zone, i.e., the range of distances from the star at which liquid water can exist on the planet. In other words, if the
planet is too close to its star, the water evaporates; if it is too far away, the planet freezes. There are five techniques to detect exoplanets,
including radial velocity, transit, direct imaging, gravitational microlensing, and astrometry. A summary of each method can be
found in Table 1.
As shown in Table 1, the transit method, with the detection of more than 3800 planets, is the most successful method for
discovering planets so far. In this technique, the starlight observed by a space telescope is studied. From the recorded
information, changes in the brightness of the star over time are evaluated. If a planet passes between the observer
(telescope) and the host star, the star's brightness decreases by a very small amount. Using this approach, scientists
identify planets with different properties. The transit method is illustrated in Fig. 1 based on the brightness received
from the star over time. The period of the planet can also be determined based on the number of passes. An example can be
seen in Fig. 2, which shows a planet that orbits its star every 120 days.
Table 1
A summary of exoplanet detection techniques [46].

Method                      Basis of the method                                                                        Planets(a)
Radial velocity             Changing of the color of the starlight due to an orbiting planet that makes the star wobble    915
Transit                     Passing of a planet between its host star and an observer reduces the light received from it  3845
Direct imaging              Taking pictures of the planet by removing the overwhelming glare of the star it orbits          58
Gravitational microlensing  Bending and focusing of a star's light due to gravity as a planet passes between star and Earth 129
Astrometry                  Wobbling of a star, relative to nearby stars, due to the orbit of a planet                        1

(a) Number of planets detected until March 2022.
Fig. 1. Schematic view of the Transit technique.
Source: The Artist’s concept of the star adopted
from Hahn [47].
Fig. 2. An example for the number of passes of a planet in front of its host star.
Many planets have been discovered so far. However, identifying planets alone is not enough; the most important challenge is
to find planets that can host life. In general, a planet may be considered able to host life if it is rocky; if it is at a
suitable distance from the host star to hold liquid water on its surface; if it has planets like Jupiter in its
neighborhood, whose gravity can absorb space rocks that might otherwise collide with the planet under study; if it has a
moon to create stable weather conditions; and if it rotates around its axis so as not to be gravitationally locked.
However, these are only some of the conditions that such planets should have. Perhaps the most important factor in
identifying a planet as a place that can host life is its location. For this purpose, first the location of the stellar
system in the galaxy, and then the location of the planet itself, are examined to verify that they lie in the life belt of
the host galaxy and of the stellar system, respectively (Figs. 3 and 4). It is worth mentioning that the life belt (shown
in green in the figures) depends on the galaxy and star types.
Fig. 3. Habitat zone (green region) for different stars. (For interpretation of the references to color in this figure legend, the reader is referred to the web
version of this article.)
Fig. 4. Habitat zone (green region) of the Milky way galaxy . (For interpretation of the references to color in this figure legend, the reader is referred to the
web version of this article.)
Source: The Artist’s concept of the galaxy adopted from Budassi [48].
Fig. 5. Example of transit with different SN.
Although the discovery of viable planets is one of the most interesting topics in the scientific community, it should be
noted that the light we receive from stars is in some cases thousands or even millions of years old. Therefore, some of
these stars may no longer exist as we see them through a telescope; for example, having run out of fuel, they may have
turned into a white dwarf, or into a supernova through a nuclear explosion. Also, the stars are very far away from us, and
with current knowledge, interstellar travel is not possible for humans. For example, the fastest human-made object (the
Voyager 1 spacecraft), which travels at a speed of about 60,000 kilometers per hour, would take more than 80,000 years to
reach the nearest star (Proxima Centauri). Nevertheless, the discovery of life-hosting planets is one of the most complex
and interesting topics in astrophysics, and the transit method, as a successful method of discovering planets, is very
important. Therefore, regardless of whether humans will one day find their way to other galaxies and stellar systems, the
transit method can be used to determine optimal solutions to problems.
3. Transit search algorithm
3.1. Structure of the TS algorithm
In this section, details of the proposed algorithm are presented. The process of the TS is shown in Fig. 5. In the algorithm
structure, two parameters are defined: the number of host stars (ns) and the signal-to-noise ratio (SN). The SN parameter is
determined based on the transit model, and the noise is estimated using the standard deviation of the observations obtained
outside the transit. In practice, there is always the possibility of noise in the photons received from star images [49].
Fig. 5 shows a sample transit for different values of the signal-to-noise ratio. It should be noted that the product of the
two parameters of the proposed algorithm (ns and SN) equals the initial population size of TS.
In the following sub-sections, the implementation phases of the TS (see Fig. 6) and their relationships are described.
Furthermore, the sensitivity analysis on ns and SN, as well as the pseudo-code of the algorithm, are presented in the last
two sub-sections.
Fig. 6. Flowchart of TS algorithm.
3.1.1. TS phases
There are five phases for implementing the TS: the galaxy, transit, planet, neighbor, and exploitation phases. In this
section, the details of each of these phases are provided.
3.1.1.1. Galaxy phase. The algorithm starts by selecting a galaxy. For this purpose, a random location in the search space
is chosen as the galaxy center. Once this location is determined, it is necessary to identify the habitable zones of the
galaxy (life belt). To do this, ns × SN random regions L_R are evaluated by Eqs. (1) to (3) to find the situations with the
potential for the best stellar systems (the regions with a high probability of hosting life). Finally, the ns regions with
the best fitness are selected; these regions have the potential to host life, and the next steps of the algorithm begin
with them.
L_{R,l} = L_{Galaxy} + D - Noise,   l = 1, ..., (n_s × SN)   (1)

in which

D = c_1 L_{Galaxy} - L_r   if z = 1 (Negative Region)
D = c_1 L_{Galaxy} + L_r   if z = 2 (Positive Region)   (2)

Noise = (c_2)^3 L_r   (3)
In the equations above, L_{Galaxy} represents the center location of the galaxy and L_r is a random location in the search
space. The two coefficients between 0 and 1 are a random number (c_1) and a random vector (c_2) whose size equals the
number of variables of the optimization problem.
Fig. 7. An example for the selected regions by TS .
Source: The Artist’s concept of the galaxy adopted from Budassi [48].
Fig. 8. The selection process of the stellar systems by TS .
Source: The Artist’s concept of the galaxy adopted from Budassi [48].
The parameter D is taken as the difference between the situation under study and the center of the galaxy. This area can be
located in front of the central region (positive part) or behind the central region (negative part) of the galaxy (see
Fig. 7). The zone parameter (z) here is a random number equal to 1 or 2. Also, to increase the accuracy of positioning, it
is necessary to remove the noise in the information collected from the received signals; for this purpose, the Noise
parameter is used. Since the amount of noise cannot differ much from the intended situations, the coefficient c_2 is raised
to the power of 3 to reduce its computational value.
In the next step, from each of the selected regions, a star, which corresponds to a stellar system, must be chosen using
Eqs. (4) to (6). Therefore, at the end of this stage, there are n_s stars for the algorithm to search. The locations of the
stars are denoted by L_S in Eq. (4). The coefficients c_3 and c_4 in these equations are random numbers between 0 and 1,
and the coefficient c_5 is a random vector between 0 and 1. In Fig. 8, the selection process of the stellar systems is
illustrated.
Table 2
The meaning of star properties in TS.

Definition       Symbol in TS   L_S                 f_S
First meaning    M_1            Location            Fitness
Second meaning   M_2            Signal properties   Brightness
Fig. 9. Star ranking for example with eight stars (minimization goal).
L_{S,i} = L_{R,i} + D - Noise,   i = 1, ..., n_s   (4)

in which

D = c_4 L_{R,i} - c_3 L_r   if z = 1 (Negative Region)
D = c_4 L_{R,i} + c_3 L_r   if z = 2 (Positive Region)   (5)

Noise = (c_5)^3 L_r   (6)
In the proposed algorithm, the galaxy phase is executed only once, before starting the iterations. The purpose of this phase is to
select the appropriate situations to perform the main stages of the algorithm (phases 2 to 5). Pseudocode of the Galaxy phase can
be seen in Algorithm 1.
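The paper specifies the galaxy phase only through Eqs. (1) to (6) and Algorithm 1; the following Python sketch is one
possible reading of those equations. The NumPy implementation, the clipping of candidates to the search bounds, the sample
fitness function, and all variable names are our assumptions, not part of the original method.

```python
import numpy as np

rng = np.random.default_rng(0)

def galaxy_phase(fitness, lb, ub, ns, sn):
    """Sketch of the TS galaxy phase: pick ns promising regions around a
    random galaxy centre (Eqs. (1)-(3)), then one star per region (Eqs. (4)-(6))."""
    dim = lb.size
    L_galaxy = rng.uniform(lb, ub, dim)          # random galaxy centre

    # Eqs. (1)-(3): ns*SN candidate regions around the centre
    regions = np.empty((ns * sn, dim))
    for l in range(ns * sn):
        L_r = rng.uniform(lb, ub, dim)           # random location in the space
        c1, c2 = rng.random(), rng.random(dim)
        z = rng.integers(1, 3)                   # 1: negative region, 2: positive region
        D = c1 * L_galaxy - L_r if z == 1 else c1 * L_galaxy + L_r
        noise = c2**3 * L_r
        regions[l] = np.clip(L_galaxy + D - noise, lb, ub)

    # keep the ns fittest regions (minimisation assumed)
    best = regions[np.argsort([fitness(r) for r in regions])[:ns]]

    # Eqs. (4)-(6): one star per selected region
    stars = np.empty_like(best)
    for i, L_R in enumerate(best):
        L_r = rng.uniform(lb, ub, dim)
        c3, c4, c5 = rng.random(), rng.random(), rng.random(dim)
        z = rng.integers(1, 3)
        D = c4 * L_R - c3 * L_r if z == 1 else c4 * L_R + c3 * L_r
        noise = c5**3 * L_r
        stars[i] = np.clip(L_R + D - noise, lb, ub)
    return stars
```

For instance, `galaxy_phase(lambda x: np.sum(x**2), np.full(2, -5.0), np.full(2, 5.0), ns=5, sn=10)` returns the five
starting stars for a 2-D sphere function.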
3.1.1.2. Transit phase. To detect a transit, it is necessary to re-measure the amount of light received from the star and to
detect a possible reduction in the received light signals. In the TS algorithm, L_S and its corresponding fitness (f_S) have
two meanings (M_1 and M_2), presented in Table 2. When the goal is to use the location of the star to determine and update
the location of a planet, M_1 is used; when the goal is to determine and update the brightness received from the star, M_2
is used. Accordingly, in the case of M_2, a change in L_S means a new specification of the light signal, while in the case
of M_1, a change in L_S means a change in the location of the star.
The luminosity of a star (or absolute brightness) is its intrinsic brightness, i.e., the amount of energy the star radiates
per second. In contrast, the apparent brightness of the star is how bright it appears to observers. In astronomy, stars are
classified into seven main groups according to their luminosity and temperature [50,51]. The largest and brightest stars
have blue light (first group), with luminosities more than 30,000 times that of the Sun. The faintest stars, which have much
lower temperatures and are less massive and less luminous than the first group, have red light with a luminosity less than
8% of the Sun's [50,51]. Luminosity is an important parameter that is used directly to calculate the habitable zone of
stars (see Fig. 3). Therefore, astronomers consider these classes in order to find planets around which life is possible.
In the TS algorithm, it is necessary to specify the star classes. For this purpose, the brightness of each star is
considered using the definition of M_2. Fig. 9 illustrates a search space for an example with eight stars and their ranks
for a minimization goal (the best fitness has the lowest brightness and therefore rank 1). It is worth mentioning that the
star with the highest brightness (the eighth star in this example) has the highest fitness value of the objective function.
The luminosity of a star can be estimated based on the spectrum of light (star class) received by the observer (telescope)
and the distance of the star from the observer; clearly, a small distance means that more photons are received. Accordingly,
in the proposed algorithm, the luminosity of the star is obtained approximately from Eq. (7):
L_i = (R_i / n_s) / (d_i)^2,   i = 1, ..., n_s,   R_i ∈ {1, ..., n_s}   (7)

d_i = ||L_{S,i} - L_T||^2,   i = 1, ..., n_s   (8)
in which L_i and R_i are the luminosity and rank of star i, and d_i (Eq. (8)) is the distance measure between the telescope
and star i. The location of the telescope, L_T, is selected randomly at the start of the algorithm and does not change
during optimization. To update the light received from the star, the new signal is obtained by changing the value of L_S
using the definition of M_2; for this purpose, Eqs. (9) to (11) are used. The coefficients c_6 and c_7 are a random number
between -1 and 1 and a random vector between 0 and 1, respectively.
L_{S,new,i} = L_{S,i} + D - Noise,   i = 1, ..., n_s   (9)
In which,
D = c_6 L_{S,i}   (10)

Noise = (c_7)^3 L_S   (11)
Finally, the brightness of the star is calculated (f_S obtained using the new L_{S,new}), and accordingly, the new
luminosity L_{i,new} is determined by Eq. (12):

L_{i,new} = (R_{i,new} / n_s) / (d_{i,new})^2,   i = 1, ..., n_s   (12)
The parameter d_{i,new} can be calculated using the new L_S and the location of the telescope. By comparing L_i with
L_{i,new}, the possibility of transit can be determined. This probability, P_T, denoted by 1 (transit) or 0 (no transit),
is specified based on Eq. (13). If P_T = 1, the planet phase is used; otherwise, the neighbor phase is implemented in the
current iteration. Algorithm 2 shows the pseudocode of the transit phase.
If L_{i,new} < L_i :   P_T = 1 (Transit)
If L_{i,new} >= L_i :  P_T = 0 (No Transit)   (13)
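A minimal Python sketch of the transit phase under Eqs. (7) to (13) may clarify the ranking and comparison steps. The
squared-distance measure, the minimisation convention, and all names here are our reading of the text, not the authors'
code; the paper itself only provides Algorithm 2.

```python
import numpy as np

rng = np.random.default_rng(1)

def luminosity(stars, fits, L_T):
    """Eqs. (7)-(8): rank stars by fitness (rank 1 = best) and convert rank
    and telescope distance into an approximate luminosity."""
    ns = len(stars)
    ranks = np.empty(ns, dtype=int)
    ranks[np.argsort(fits)] = np.arange(1, ns + 1)          # best fitness -> rank 1
    d = np.array([np.sum((s - L_T) ** 2) for s in stars])   # squared distance
    return (ranks / ns) / d**2

def transit_check(stars, fits, fitness, L_T):
    """Eqs. (9)-(13): re-observe each star's signal and flag a transit
    (P_T = 1) when the new luminosity drops below the old one."""
    ns, dim = stars.shape
    L_old = luminosity(stars, fits, L_T)
    new_stars = np.empty_like(stars)
    for i in range(ns):
        c6 = rng.uniform(-1, 1)                  # random number in [-1, 1]
        c7 = rng.random(dim)                     # random vector in [0, 1]
        D = c6 * stars[i]                        # Eq. (10)
        noise = c7**3 * stars[i]                 # Eq. (11)
        new_stars[i] = stars[i] + D - noise      # Eq. (9)
    new_fits = np.array([fitness(s) for s in new_stars])
    L_new = luminosity(new_stars, new_fits, L_T)
    P_T = (L_new < L_old).astype(int)            # Eq. (13)
    return new_stars, new_fits, P_T
```

Each star with `P_T[i] == 1` would then proceed to the planet phase; the others go to the neighbor phase.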
3.1.1.3. Planet phase. Given the value of P_T from the previous phase, if a transit is observed (P_T = 1), the planet phase
is implemented in the TS algorithm. In this phase, first, the initial location of the detected planet is determined. Since
the light received by the observer (telescope) comes from the star, a decrease in this light (occurrence of a transit)
happens when the planet passes between the star and the telescope (Fig. 10). Based on this, the initial location of the
detected planet (L_z) can be determined; in the TS algorithm, this is done by Eq. (14).
L_z = (c_8 L_T + R_L L_{S,i}) / 2,   i = 1, ..., n_s   (14)
In which,
R_L = L_{S,new,i} / L_{S,i}   (15)
The parameter R_L represents the luminance ratio (calculated by Eq. (15)), and the coefficient c_8 has a random value
between 0 and 1. In Eq. (14), using the average of the two relative locations of the star and the telescope, the position
of the planet, whose current location is between the star and the telescope, is determined.
As mentioned earlier, the signal-to-noise ratio (SN) is one of the most important parameters in confirming a transit and
reducing the impact of noise. Once the approximate position of the planet is determined, a number of received signals are
examined to pinpoint the location of the planet in its stellar system; in the TS algorithm, SN signals are considered for
this purpose (Eq. (16)). The coefficient c_9 in this equation is a random number between -1 and 1, and c_10 is a random
vector with values between -1 and 1. After determining the signals (L_m), the final location of the detected planet (L_P)
is obtained by taking the average of the SN signals using Eq. (17).
L_{m,j} = L_z + c_9 L_r    if z = 1 (Aphelion region)
L_{m,j} = L_z - c_9 L_r    if z = 2 (Perihelion region),   j = 1, ..., SN
L_{m,j} = L_z + c_10 L_r   if z = 3 (Neutral region)   (16)

L_P = ( Σ_{j=1}^{SN} L_{m,j} ) / SN   (17)
In astronomy, the farthest and closest distances of a planet (like the Earth) from its host star (like the Sun) are called
the aphelion and perihelion, respectively. To account for the orbital position of the planet, the TS algorithm considers
three zones, namely the aphelion, perihelion, and neutral regions (the zone between the aphelion and perihelion regions),
applied through the zone parameter (z) in the planet phase (see Eq. (16) and Fig. 11). The value of this parameter is a
random number equal to 1, 2, or 3. The pseudocode of the current phase can be seen in Algorithm 3. In each iteration of the
algorithm, if the detected planet is better than the previously discovered planet in the stellar system under study (better
conditions for life), the location of this planet is saved. Therefore, in the TS algorithm, there is only one planet (the
best planet) for each of the n_s stars.
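The planet phase of Eqs. (14) to (17) can be sketched as follows. Drawing a fresh c_9 for each of the SN signals, and the
specific variable names, are our implementation choices; the paper only gives Algorithm 3.

```python
import numpy as np

rng = np.random.default_rng(2)

def planet_phase(L_T, L_S, L_S_new, lb, ub, sn):
    """Eqs. (14)-(17): place the detected planet between star and telescope,
    then refine it as the mean of SN perturbed signals."""
    dim = L_S.size
    c8 = rng.random()
    R_L = L_S_new / L_S                          # luminance ratio, Eq. (15)
    L_z = (c8 * L_T + R_L * L_S) / 2             # initial location, Eq. (14)

    signals = np.empty((sn, dim))
    for j in range(sn):
        L_r = rng.uniform(lb, ub, dim)
        z = rng.integers(1, 4)                   # 1: aphelion, 2: perihelion, 3: neutral
        if z == 1:
            signals[j] = L_z + rng.uniform(-1, 1) * L_r      # c_9 scalar
        elif z == 2:
            signals[j] = L_z - rng.uniform(-1, 1) * L_r
        else:
            signals[j] = L_z + rng.uniform(-1, 1, dim) * L_r # c_10 vector
    return signals.mean(axis=0)                  # final location L_P, Eq. (17)
```

Note that Eq. (15) assumes the star's signal components are nonzero; a practical implementation would guard the division.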
Fig. 10. Transit observed by the space telescope (The detected planet is placed between the telescope and the star).
Source: The Artist’s concept of the star adopted from Hahn [47].
Fig. 11. The orbit of a planet around the star and the corresponded zones in the algorithm.
Source: The Artist’s concept of the star adopted from Hahn [47]
3.1.1.4. Neighbor phase. If no transit is observed for a star in the current observation, the neighborhood of the previously
detected planet of that star is studied. In other words, if a neighbor has better conditions than the current planet (better
conditions to host life), it replaces the current planet of the star. This is done in the TS algorithm in the neighbor phase
using Eqs. (18) to (20). First, the initial location of the neighbor (L_z) is estimated using Eq. (18), based on its host
star (L_{S,new}) and a random location (L_r). Then, the final location of the neighbor planet (L_N) is determined by
Eqs. (19) and (20). The coefficients c_11 and c_12 in Eq. (18) are random numbers between 0 and 1, while the coefficients
c_13 and c_14 in Eq. (19) are a random number and a random vector between -1 and 1, respectively. The pseudocode of the
neighbor phase is presented in Algorithm 4.
L_z = (c_11 L_{S,new} + c_12 L_r) / 2   (18)

L_{n,j} = L_z - c_13 L_r   if z = 1 (Aphelion region)
L_{n,j} = L_z + c_13 L_r   if z = 2 (Perihelion region),   j = 1, ..., SN
L_{n,j} = L_z + c_14 L_r   if z = 3 (Neutral region)   (19)

L_{N,i} = ( Σ_{j=1}^{SN} L_{n,j} ) / SN   (20)
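The neighbor phase of Eqs. (18) to (20) can likewise be sketched in Python. As before, the paper provides only
Algorithm 4; per-signal redraws of the coefficients and the variable names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def neighbor_phase(L_S_new, lb, ub, sn):
    """Eqs. (18)-(20): when no transit is seen, probe SN neighbours of the
    star's current planet and return their mean as the candidate location."""
    dim = L_S_new.size
    c11, c12 = rng.random(), rng.random()
    L_r0 = rng.uniform(lb, ub, dim)
    L_z = (c11 * L_S_new + c12 * L_r0) / 2       # initial neighbour, Eq. (18)

    neighbours = np.empty((sn, dim))
    for j in range(sn):
        L_r = rng.uniform(lb, ub, dim)
        c13 = rng.uniform(-1, 1)                 # random number in [-1, 1]
        c14 = rng.uniform(-1, 1, dim)            # random vector in [-1, 1]
        z = rng.integers(1, 4)
        if z == 1:                               # aphelion region
            neighbours[j] = L_z - c13 * L_r
        elif z == 2:                             # perihelion region
            neighbours[j] = L_z + c13 * L_r
        else:                                    # neutral region
            neighbours[j] = L_z + c14 * L_r
    return neighbours.mean(axis=0)               # final neighbour L_N, Eq. (20)
```

The returned neighbour would replace the star's current planet only if it has a better fitness.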
3.1.1.5. Exploitation phase. In the previous phases, the best planet is determined for each star. As mentioned earlier,
discovering a planet alone is not enough; it is necessary to study the characteristics of the planet and its conditions for
hosting life. In the TS algorithm, this is done in the exploitation phase, where a new definition of L_P is used: L_P in
this phase (denoted L_E) refers to the characteristics of the planet (such as its density, materials, atmosphere, etc.).
Then, by adding new knowledge (K), the final characteristics of the planet are modified SN times (j = 1, ..., SN) using
Eqs. (21) and (22). In these equations, c_15 is a random number between 0 and 2, c_16 is a random number between 0 and 1,
and c_17 is a random vector between 0 and 1. The parameter P in Eq. (22) indicates a random power between 1 and (n_s × SN).
The parameter c_k is a random number (1, 2, 3, or 4) that indicates the knowledge index; the four states considered in TS
for c_k can be seen in Table 3, where ''reliable'' refers to the amount of confidence in the information collected for the
planet. The best planet for each star is the best L_E found in this phase.
Table 3
The considered states for the exploitation phase of the TS algorithm.
The global solution of the algorithm is the best planet found among all n_s detected planets. Algorithm 5 shows the
pseudocode of the exploitation phase.
L_{E,j} = c_16 L_P + c_15 K   if c_k = 1 (State 1)
L_{E,j} = c_16 L_P - c_15 K   if c_k = 2 (State 2)
L_{E,j} = L_P - c_15 K        if c_k = 3 (State 3)
L_{E,j} = L_P + c_15 K        if c_k = 4 (State 4)   (21)

in which

K = (c_17)^P L_r   (22)
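A compact sketch of the exploitation phase under Eqs. (21) and (22), keeping the best of the SN candidates. The greedy
acceptance rule and the minimisation convention are our reading of the surrounding text; the paper provides only
Algorithm 5.

```python
import numpy as np

rng = np.random.default_rng(4)

def exploitation_phase(L_P, fitness, lb, ub, ns, sn):
    """Eqs. (21)-(22): refine the best planet L_P by SN knowledge-driven
    perturbations and keep the fittest candidate."""
    dim = L_P.size
    best, best_fit = L_P.copy(), fitness(L_P)
    for _ in range(sn):
        L_r = rng.uniform(lb, ub, dim)
        c15 = rng.uniform(0, 2)                  # random number in [0, 2]
        c16 = rng.random()                       # random number in [0, 1]
        c17 = rng.random(dim)                    # random vector in [0, 1]
        p = rng.integers(1, ns * sn + 1)         # random power in [1, ns*SN]
        K = c17**p * L_r                         # new knowledge, Eq. (22)
        ck = rng.integers(1, 5)                  # knowledge index: states 1..4
        if ck == 1:
            cand = c16 * L_P + c15 * K
        elif ck == 2:
            cand = c16 * L_P - c15 * K
        elif ck == 3:
            cand = L_P - c15 * K
        else:
            cand = L_P + c15 * K
        f = fitness(cand)
        if f < best_fit:                         # minimisation assumed
            best, best_fit = cand, f
    return best, best_fit
```

Because the incumbent L_P is evaluated first, the returned fitness can never be worse than the input planet's fitness.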
3.2. Pseudocode for TS algorithm
In the previous sections, the phases of the proposed algorithm, their definitions and details, along with the relevant
equations and pseudocodes, were presented. To better express the implementation process of the TS, the general pseudocode
is also presented here in Algorithm 6.
3.3. Examples for the optimization process of the TS algorithm
In order to evaluate the performance of the proposed algorithm and illustrate its optimization process, six benchmark functions have been selected. Each function represents one of six characteristic forms: many local minima, bowl-shaped, steep ridges/drops, valley-shaped, plate-shaped, and free-shaped. The specifications of the selected functions are presented in Table 4.
Table 4
Information of the selected functions.
Number Function Shape Dimension Goal (X1, X2, Fitness)
1 Ackley Many local minima 2 (0, 0, 0)
2 Bohachevsky Bowl-shaped 2 (0, 0, 0)
3 De-Jong 5 Steep ridges/drops 2 (-32, -32, 0.998)
4 Rosenbrock Valley-shaped 2 (1, 1, 0)
5 Styblinski-Tang Free-shaped 2 (-2.903, -2.903, -78.332)
6 Zakharov Plate-shaped 2 (0, 0, 0)
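For reference, two of the Table 4 functions can be written out in their standard forms from the benchmark library cited later in this article (Surjanovic and Bingham); the parameter defaults a, b, c follow the usual Ackley convention:

```python
import math

def ackley(x, a=20, b=0.2, c=2 * math.pi):
    """Ackley function (many local minima); global minimum 0 at the origin."""
    d = len(x)
    s1 = sum(xi ** 2 for xi in x) / d
    s2 = sum(math.cos(c * xi) for xi in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def zakharov(x):
    """Zakharov function (plate-shaped); global minimum 0 at the origin."""
    s1 = sum(xi ** 2 for xi in x)
    s2 = sum(0.5 * (i + 1) * xi for i, xi in enumerate(x))
    return s1 + s2 ** 2 + s2 ** 4
```

Both evaluate to 0 at the origin, matching the fitness goals listed in Table 4.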
The process of the algorithm for one of the above six functions, shown in three dimensions, is illustrated in Fig. 12. Based on the figure, the location of the center of the galaxy is determined first, followed by its habitable zones. Next, a number of stars (5 in this example) are selected from the best regions of the previous step to start the optimization. At the end of each iteration, the best planets (one planet per star) are stored and used in the next iteration of the algorithm. In the figure, the parameter it denotes the current iteration number.
Five host stars were selected to run the TS algorithm in this section, with the signal-to-noise value SN set to 10 and the number of iterations set to 30. Figs. 13 to 15 show the optimization process for the selected functions. For each function, its shape is first shown over a two-dimensional space (X1 and X2). Then, the possible search space and the best planets (one per star) found in this space are shown for each of the 30 iterations of the algorithm. In the proposed algorithm, the initial location of the telescope is selected randomly in the search space. The observation or non-observation of a transit is determined for each star in each iteration; accordingly, the best planet for each host star is identified in each iteration and its location is updated during the algorithm process.
3.4. Sensitivity analysis of the TS parameters
Previous studies indicated that different values of the parameters may be optimal at different stages of the optimization process
of an algorithm [52]. Therefore, in this section, the sensitivity analysis of two parameters of the TS algorithm (𝑛𝑠and SN) as well as
Fig. 12. An example for optimization process of Ackley function.
Table 5
Fitness values obtained from the sensitivity analyses of 𝑛𝑠.
Function 𝑛𝑠
5 10 15 20 25 30 35 40 45 50
Ackley 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16
Bohachevsky 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
De-Jong 5 1.03E+00 1.02E+00 1.08E+00 9.98E-01 1.17E+00 1.10E+00 1.07E+00 1.10E+00 1.03E+00 1.04E+00
Rosenbrock 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
Styblinski-Tang 5.52E-03 3.97E-03 3.38E-03 2.31E-03 1.19E-03 1.48E-03 7.45E-04 8.18E-04 3.87E-04 7.72E-04
Zakharov 3.33E-46 5.25E-74 4.78E-95 8.65E-114 2.42E-129 3.83E-143 8.10E-154 2.63E-165 5.75E-171 7.47E-183
the number of iterations (nIt) is evaluated. For this purpose, each parameter is varied while the other parameters are held constant (ns = 5 and SN = 10). The number of iterations for the sensitivity analyses of ns and SN is 100, and 30 runs were performed; the results shown in Fig. 16 and Tables 5 to 7 are the mean values over all runs.
As can be seen from the figure, increasing ns and SN improves the accuracy of the algorithm in some cases (e.g., the Rosenbrock function). Such a result is not unexpected, especially for SN, because with a larger SN more locations in the search space are examined by the algorithm.
The speed of convergence to the best response is very important for any optimization method. Therefore, the performance of the proposed algorithm in converging to the optimal value of the objective function is also evaluated. To do this, the parameters ns and SN were set to 5 and 10, respectively, and the optimal value was determined for different numbers of iterations. As shown in Fig. 16, the TS algorithm converges quickly and can provide the desired response in a reasonable number of iterations.
4. Numerical study and comparison
In this section, the performance of the TS algorithm on various benchmark problems is examined and compared with some of the most well-known optimization algorithms. For this purpose, a set of unconstrained and constrained problems was evaluated; details of the problems and algorithms are provided in each section. A system with 8 GB of RAM and a quad-core 1.4 GHz Intel CPU was used to run the algorithms in this article, using MATLAB R2020b on a Macintosh operating system.
Fig. 13. The optimization process for Ackley and Bohachevsky functions.
Table 6
Fitness values obtained from the sensitivity analyses of SN.
Function SN
5 10 15 20 25 30 35 40 45 50
Ackley 1.82E-11 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16
Bohachevsky 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
De-Jong 5 1.41E+00 1.20E+00 1.03E+00 1.06E+00 9.98E-01 9.98E-01 9.98E-01 9.98E-01 1.06E+00 9.98E-01
Rosenbrock 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
Styblinski-Tang 1.86E-02 1.97E-02 1.02E-02 1.11E-02 6.47E-03 3.21E-03 4.77E-03 1.53E-03 3.87E-03 2.90E-03
Zakharov 1.96E-23 1.63E-46 2.13E-68 2.74E-88 8.90E-114 2.74E-136 1.59E-159 1.54E-182 7.51E-202 2.79E-227
Fig. 14. The optimization process for De-Jong and Rosenbrock functions.
Table 7
Fitness values obtained from the sensitivity analyses of 𝑛𝐼𝑡 .
Function 𝑛𝐼𝑡
20 40 60 80 100 120 140 160 180 200
Ackley 1.85E-08 8.32E-12 1.13E-14 1.01E-15 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16 8.88E-16
Bohachevsky 9.18E-14 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
De-Jong 5 6.39E+00 3.12E+00 3.24E+00 2.32E+00 1.91E+00 1.57E+00 1.49E+00 1.44E+00 1.13E+00 1.13E+00
Rosenbrock 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
Styblinski-Tang 1.97E-01 1.04E-01 5.98E-02 4.52E-02 6.29E-02 2.63E-02 1.50E-02 2.45E-02 1.79E-02 6.68E-03
Zakharov 4.30E-18 2.90E-24 1.53E-29 4.79E-34 1.28E-36 2.28E-39 8.42E-41 3.32E-43 9.89E-44 3.11E-46
Thirteen algorithms were used for the comparison study in this article: the genetic algorithm (GA), particle swarm optimization (PSO) [53], differential evolution (DE) [22], harmony search (HS) [54], the imperialist competitive algorithm (ICA) [55], artificial bee colony (ABC) [56], cuckoo search (CS) [21], the bat-inspired algorithm (BA) [57], teaching–learning-based
Fig. 15. The optimization process for Styblinski-Tang and Zakharov functions.
optimization (TLBO) [58], the grey wolf optimizer (GW) [59], whale optimization (WO) [60], salp swarm (SS) [61], and Harris hawks optimization (HHO) [62]. The parameter values used for each algorithm are presented in Table 8. For the unconstrained problems, 500 iterations were used, while for the constrained problems the number of iterations of each algorithm was set so that the objective function was evaluated 200,000 times. It is worth mentioning that all results provided in this section were obtained from 30 runs.
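The fixed evaluation budget above can be converted into an iteration count per algorithm under the simplifying assumption of one objective evaluation per population member per iteration (actual per-iteration evaluation costs differ between algorithms, so this is only a sketch):

```python
def iterations_for_budget(evaluations, population):
    """Convert a fixed objective-evaluation budget into an iteration
    count, assuming one evaluation per population member per iteration
    (a simplification; real per-iteration costs vary by algorithm)."""
    return evaluations // population
```

With the 200,000-evaluation budget and a population of 50, this would give 4,000 iterations.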
4.1. Optimization of unconstrained problems
In this section, a suite of benchmark mathematical functions was used. This set includes 43 functions: 28 are examined in the small-dimensional section, and the remaining 15 in the large-dimensional section. These functions cover six different forms: many local minima, bowl-shaped,
Fig. 16. Sensitivity Results (the mean results for the 30 runs).
steep ridges/drops, valley-shaped, plate-shaped, and free-shaped. Details of the functions are presented in each section. The
equations of the considered functions and their descriptions are available online at https://www.sfu.ca/~ssurjano by Surjanovic
and Bingham [63].
4.1.1. Small-dimensional problems (F1 to F28)
In this section, 28 benchmark functions (F1 to F28) with dimensions 1 to 6 have been evaluated by the considered algorithms
using 30 runs. The definition and details of these functions are given in Table 9. Also, a summary of the objective function values
can be seen in Table 10. Based on the results, it is clear that the proposed algorithm has performed well in optimizing and providing
the values close to the goal best response. This result can be well seen in Figs. 17 and 18. The electronic supplementary material
file attached to this article (Sup. 1) provides more details of the results of this section, including the standard deviation (STD) of
the objective function values, the average execution time, and its standard deviation.
4.1.2. Large-dimensional problems (F29 to F43)
In order to evaluate the performance of the algorithm for large-scale problems, 15 benchmark functions (F29 to F43 in Table 11)
were considered. Dimensions intended for study in this section were 30, 100, 500, and 1000. A number of 30 implementations have
been performed for each of the problems, algorithms, and dimensions. The results (Fig. 19) show that the proposed algorithm has
good performance in estimating the optimal response for large-scale problems. A summary of the results for all the algorithms
implemented in this section is provided in Tables 12 to 15. Further details of the results, including the mean values obtained for the
objective function, execution time, and standard deviation, are available for further evaluation in the supplementary file attached
to this article (Sup. 1).
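As a quick check on the dimension-dependent goal values used for F39 (Styblinski-Tang) and F42 (Trid) in this section, the closed-form global minima listed in Table 11 can be computed directly (the helper names are ours):

```python
def styblinski_tang_min(D):
    """Global minimum of the D-dimensional Styblinski-Tang function
    (Table 11): -39.16599 * D."""
    return -39.16599 * D

def trid_min(D):
    """Global minimum of the D-dimensional Trid function (Table 11):
    -D * (D + 4) * (D - 1) / 6."""
    return -D * (D + 4) * (D - 1) / 6
```

For D = 30 these give approximately -1174.98 and exactly -4930, the magnitudes of the F39 and F42 goal values reported for that dimension.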
4.1.3. CEC problems (C1 to C10)
In this section, a collection of 10 functions from CEC 2019 [64,65] including CEC01 to CEC10 were used to evaluate the
performance of the proposed algorithm. More information about details of these functions can be found in their references. Based
on the results obtained from 30 implementations (Table 16) for the functions considered in this section (namely C1 to C10, the
performance of the algorithm was appropriate compared to other algorithms. The standard deviation values can be seen in Sup. 1.
Table 8
The considered values for the parameters of the algorithms.
Number Algorithm Parameters
1 TS Number of host stars = 5; Signal-to-noise ratio = 10
2 GA Number of populations = 50; Percent of crossover = 0.1; Percent of mutation = 0.9
3 PSO Swarm size = 50; Inertia weight = 0.73; Inertia weight damping ratio = 1
4 DE Population size = 50; Lower bound of scaling factor = 0.2; Upper bound of scaling factor = 0.8; Crossover probability = 0.2
5 HS Harmony memory size = 50; Harmony memory consideration rate = 0.75; Pitch adjustment rate = 0.05; Fret width = 0.1; Fret width damp = 0.95
6 ICA Number of populations = 50; Number of imperialists = 10; zeta = 0.1; beta = 2
7 ABC Number of bees = 50
8 CS Number of nests = 50; Discovery rate of alien eggs = 0.25
9 BA Loudness = 0.5; Pulse rate = 0.5; Frequency minimum = 0; Frequency maximum = 2; Number of bats = 50
10 TLBO Teaching factor = {1, 2}; Number of populations = 50
11 GW Number of wolves = 50
12 WO Number of whales = 50
13 SS Number of salps = 50
14 HHO Number of hawks = 50
4.2. Optimization of the constrained problems
In general, there are two types of optimization problems, which include unconstrained and constrained problems [66]. In the
unconstrained type, the optimization algorithm tries to find the lowest (target: minimization) or maximum (target: maximization)
possible value for an objective function. In the second type of problem, there is a more complex process since the best solution
must satisfy the conditions (constraints) defined for the problem. In other words, the solution should be feasible. In this article, 20
constrained benchmark problems were optimized by the TS algorithm (and 13 other algorithms). As mentioned earlier, the number
of iterations of each algorithm was adjusted so that for each of the problems in this section, 200,000 times the objective function
was evaluated by each algorithm. Similar to the previous sections, each problem was executed 30 times, and the mean values of
the objective function are presented and results are evaluated in comparison with other algorithms. More details can be seen in the
next sub-sections.
4.2.1. Handling-technique
In optimizing the constrained problems, a method of handling technique is needed to apply the constraints conditions in the
optimization process. There are several methods in literature to handle the constraints [23,6770]. In this article, the technique
of applying a penalty to the value of the objective function is used. For this purpose, Eq. (23) to Eq. (26) were used. In these
equations, the parameters f(x) and C(x) are the objective functions and the constrain values, respectively. Also, g(x), h(x) are the
constrain values for the inequality and equality constraints. The solution vector here is x. Furthermore, jis the number of the whole
constraints, qis the number of the inequality constraints, and mis the number of the equality constraints. The parameter 𝛿deals
with a small tolerance (1E-04 in this research).
Fitness = f(x) + 10^20 Σ_{j=1}^{m} (C_j(x))^2    (23)

In which,

C_j(x) = max{0, g_j(x)},        if 1 ≤ j ≤ q
C_j(x) = max{0, |h_j(x)| - δ},  if q + 1 ≤ j ≤ m    (24)
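The static-penalty scheme of Eqs. (23) to (26) can be sketched as follows (the function name and the list-of-callables interface are our assumptions; the `penalty` argument corresponds to the 10^20 factor in Eq. (23)):

```python
def penalized_fitness(f, g_list, h_list, x, delta=1e-4, penalty=1e20):
    """Sketch of Eqs. (23)-(26): static-penalty constraint handling.
    g_list: inequality constraints, feasible when g_j(x) <= 0;
    h_list: equality constraints, feasible when |h_j(x)| <= delta."""
    # Eq. (24), 1 <= j <= q: violation of inequality constraints
    C = [max(0.0, g(x)) for g in g_list]
    # Eq. (24), q+1 <= j <= m: violation of equality constraints
    C += [max(0.0, abs(h(x)) - delta) for h in h_list]
    # Eq. (23): penalized fitness
    return f(x) + penalty * sum(cj ** 2 for cj in C)
```

For example, minimizing f(x) = x^2 subject to x >= 1 (i.e., g(x) = 1 - x <= 0) leaves feasible points unpenalized while infeasible points receive a huge fitness value.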
Table 9
The considered problems with small dimensions.
Name in this article Name of the function Dimension Bound Global minima
F1 Beale 2 [-4.5, 4.5] 0
F2 Bohachevsky function 1 2 [-100, 100] 0
F3 Branin 2 X1: [-5, 10], X2: [0, 15] 0.3979
F4 Bukin function n.6 2 X1: [-15, -5], X2: [-3, 3] 0
F5 Three-hump camel 2 [-5, 5] 0
F6 Colville 4 [-10, 10] 0
F7 Cross-in-tray 2 [-10, 10] -2.0626
F8 De Jong function n.5 2 [-65.536, 65.536] 0.998
F9 Drop-wave 2 [-5.12, 5.12] -1
F10 Easom 2 [-100, 100] -1
F11 Eggholder 2 [-512, 512] -959.641
F12 Forrester 1 [0, 1] -6.021
F13 Goldstein-Price 2 [-2, 2] 3
F14 Gramacy and Lee 1 [0.5, 2.5] -0.869
F15 Hartmann 3-dimensional 3 [0, 1] -3.8628
F16 Hartmann 4-dimensional 4 [0, 1] -3.1355
F17 Hartmann 6-dimensional 6 [0, 1] -3.3224
F18 Holder table 2 [-10, 10] -19.2085
F19 Langermann 2 [0, 10] -4.1558
F20 McCormick 2 X1: [-1.5, 4], X2: [-3, 4] -1.9133
F21 Matyas 2 [-10, 10] 0
F22 Michalewicz 2 [0, π] -1.8013
F23 Perm function 0 2 [-2, 2] 0
F24 Perm function 2 [-2, 2] 0
F25 Power sum 4 [0, 4] 0
F26 Schaffer function n.2 2 [-100, 100] 0
F27 Schaffer function n.4 2 [-100, 100] 0.2926
F28 Six-hump camel 2 X1: [-3, 3], X2: [-2, 2] -1.0316
Table 10
The average results of the algorithms for F1 to F28.
Function Goal TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
F1 0.00E+00 2.19E-10 3.15E-12 0.00E+00 5.08E-02 2.36E-05 2.13E-32 6.03E-07 7.62E-02 6.32E-18 1.15E-07 2.20E-03 3.40E-09 4.19E-15 1.55E-10
F2 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 3.16E-07 0.00E+00 0.00E+00 2.18E-01 0.00E+00 0.00E+00 0.00E+00 0.00E+00 2.33E-11 0.00E+00
F3 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01
F4 0.00E+00 1.75E-02 4.43E-02 5.01E-02 1.60E-02 5.07E-02 2.19E-01 2.10E-01 2.26E-02 3.97E-02 2.24E-01 2.15E-02 8.09E-03 2.12E-02 3.89E-02
F5 0.00E+00 1.17E-26 1.03E-112 4.19E-188 3.62E-47 3.83E-10 1.65E-95 1.43E-20 3.98E-02 5.89E-23 1.44E-233 6.73E-25 5.95E-23 2.92E-15 9.20E-95
F6 0.00E+00 4.58E-02 2.59E-04 1.10E-05 5.18E-03 1.49E+00 1.71E-02 8.57E-01 1.70E-32 9.22E-03 1.10E+00 8.32E-01 1.47E-01 1.37E+00 1.68E+00
F7 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00 -2.06E+00
F8 9.98E-01 9.98E-01 1.10E+00 9.98E-01 2.87E+00 2.55E+00 9.98E-01 9.98E-01 1.03E+01 9.98E-01 2.51E+00 9.98E-01 9.98E-01 1.06E+00 1.59E+00
F9 -1.00E+00 -9.98E-01 -1.00E+00 -1.00E+00 -9.98E-01 -9.94E-01 -1.00E+00 -1.00E+00 -9.43E-01 -1.00E+00 -9.98E-01 -9.55E-01 -9.98E-01 -1.00E+00 -9.79E-01
F10 -1.00E+00 -1.00E+00 -1.00E+00 -1.00E+00 -1.00E+00 -1.00E+00 -1.00E+00 -1.00E+00 -9.33E-01 -1.00E+00 -1.00E+00 -9.00E-01 -1.00E+00 -1.00E+00 -1.00E+00
F11 -9.60E+02 -9.51E+02 -9.36E+02 -9.57E+02 -8.27E+02 -9.12E+02 -9.60E+02 -9.60E+02 -7.95E+02 -9.60E+02 -9.04E+02 -9.36E+02 -9.55E+02 -9.53E+02 -9.55E+02
F12 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00 -6.02E+00
F13 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00
F14 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01 -8.69E-01
F15 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00
F16 -3.14E+00 -3.13E+00 -3.10E+00 -3.13E+00 -3.05E+00 -3.13E+00 -3.13E+00 -3.13E+00 -3.07E+00 -3.13E+00 -3.13E+00 -3.08E+00 -3.13E+00 -3.10E+00 -3.12E+00
F17 -3.32E+00 -3.04E+00 -2.95E+00 -3.04E+00 -3.03E+00 -3.02E+00 -3.04E+00 -3.04E+00 -3.02E+00 -3.04E+00 -3.02E+00 -3.02E+00 -3.04E+00 -2.99E+00 -3.01E+00
F18 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.86E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01 -1.92E+01
F19 -4.16E+00 -4.15E+00 -4.15E+00 -4.16E+00 -3.89E+00 -4.13E+00 -4.14E+00 -4.15E+00 -3.39E+00 -4.16E+00 -4.04E+00 -4.11E+00 -4.08E+00 -4.15E+00 -4.03E+00
F20 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00 -1.91E+00
F21 0.00E+00 3.88E-12 1.94E-132 4.70E-135 2.69E-44 7.30E-08 3.21E-36 7.30E-06 1.13E-113 8.06E-25 2.37E-142 1.09E-04 7.20E-14 1.01E-15 1.10E-223
F22 -1.80E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.74E+00 -1.80E+00 -1.77E+00 -1.80E+00 -1.80E+00 -1.80E+00 -1.77E+00
F23 0.00E+00 4.29E-09 6.65E-04 0.00E+00 0.00E+00 4.31E-05 3.32E-23 2.40E-04 2.63E-32 5.78E-15 2.08E-06 1.81E-02 6.27E-10 6.98E-14 1.82E-02
F24 0.00E+00 3.29E-10 2.05E-05 0.00E+00 0.00E+00 7.15E-05 1.39E-28 2.22E-06 0.00E+00 2.47E-17 1.12E-09 1.16E-03 6.76E-19 5.91E-17 5.85E-04
F25 0.00E+00 1.30E-02 7.48E-02 1.87E-03 9.17E-04 4.82E-02 2.27E-02 2.17E-02 6.22E-06 7.53E-04 1.92E-01 6.63E-02 6.12E-03 5.82E-03 2.02E+00
F26 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00 1.35E-10 0.00E+00 5.46E-06 1.88E-02 1.22E-11 0.00E+00 2.47E-03 0.00E+00 1.22E-15 2.34E-05
F27 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 3.00E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01 2.93E-01
F28 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00
g_j(x) ≤ 0,  (j = 1, …, q)      for inequality constraints    (25)
h_j(x) = 0,  (j = q + 1, …, m)  for equality constraints    (26)
4.2.2. Mathematical problems (G01 to G13)
Thirteen constrained benchmark functions (G01 to G13) were used in this section to evaluate the performance of the proposed algorithm and compare it with the other algorithms. A summary of the specifications of these functions is given in Table 17, and their details are provided in Appendix B of the article. Table 18 shows the results for the mean values of the objective function. It can be seen that the algorithm provides acceptable performance in reaching optimal or near-optimal responses. More details of the results are available in the electronic supplementary file (Sup. 1) attached to the article.
Fig. 17. The average fitness values for F1 to F18.
4.2.3. Engineering problems (E1 to E7)
The last group of problems considered for evaluating the TS algorithm includes a set of seven different engineering problems,
the specifications of which are presented in Tables 19 to 20. Details of the functions and constraints of these problems can be seen
in Appendix B. the problems include multiple disk clutch brake (E1), pressure vessel design (E2), rolling element bearing (E3),
Fig. 18. The average fitness values for F19 to F28.
Table 11
The considered large-dimensional problems.
Name in this article Name of the function Dimension (D) Bound Global minima
F29 Ackley 30, 100, 500, 1000 [-32.768, 32.768] 0
F30 Dixon-Price 30, 100, 500, 1000 [-10, 10] 0
F31 Griewank 30, 100, 500, 1000 [-600, 600] 0
F32 Levy 30, 100, 500, 1000 [-10, 10] 0
F33 Levy function n.13 30, 100, 500, 1000 [-10, 10] 0
F34 Rastrigin 30, 100, 500, 1000 [-5.12, 5.12] 0
F35 Rosenbrock 30, 100, 500, 1000 [-2.048, 2.048] 0
F36 Rotated hyper-ellipsoid 30, 100, 500, 1000 [-65.536, 65.536] 0
F37 Schwefel 30, 100, 500, 1000 [-500, 500] 0
F38 Sphere 30, 100, 500, 1000 [-5.12, 5.12] 0
F39 Styblinski-Tang 30, 100, 500, 1000 [-5, 5] -39.16599 * D
F40 Sum of different powers 30, 100, 500, 1000 [-1, 1] 0
F41 Sum squares 30, 100, 500, 1000 [-10, 10] 0
F42 Trid 30, 100, 500, 1000 [-D^2, D^2] -D(D + 4)(D - 1)/6
F43 Zakharov 30, 100, 500, 1000 [-5, 10] 0
speed reducer (E4), tension–compression spring (E5), three-bar truss (E6), and welded beam design (E7). Fig. 20 shows the considered engineering problems of this article.
Based on the mean results of 30 runs of the engineering benchmark problems (Tables 21 and 22), the proposed algorithm is efficient in finding acceptable, near-optimal solutions and compares well with the other algorithms in providing near-optimal objective values. Further details of the results are available in the supplementary material file (Sup. 1).
Fig. 19. The average fitness values for F29 to F43 for D =30, 100, 500 and 1000.
5. Evaluation of the TS algorithm
5.1. Errors
In this section, the errors of the considered algorithms were calculated for all 73 examples. Since fourteen algorithms (A = 1, …, 14) were compared, the authors used a normalization equation (Eq. (27)) that places the errors between 0 and 1 and gives a suitable overview of the results obtained by the algorithms. The parameters Y_G and Y_P in this equation indicate the goal and the estimated fitness values, respectively.

Error = |Y_G - Y_P| / max{|Y_{P,1}|, …, |Y_{P,A}|},  1 ≤ A ≤ 14    (27)
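The normalized error of Eq. (27) can be sketched as follows (the absolute values and the indexing of the estimates follow our reading of the formula; the function name is ours):

```python
def normalized_error(Y_G, Y_P_all, a):
    """Sketch of Eq. (27): error of algorithm a (0-based index here),
    normalized by the largest-magnitude estimate among all A algorithms,
    so every error falls between 0 and 1 when Y_G is the best value."""
    denom = max(abs(y) for y in Y_P_all)
    return abs(Y_G - Y_P_all[a]) / denom
```

For example, with a goal of 0 and estimates [1.0, 2.0, 4.0], the first algorithm's normalized error is 1/4 = 0.25 and the worst algorithm's is 1.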
Table 12
The average results of the algorithms for F29 to F43 (D = 30).
Function Goal TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
F29 0.00E+00 2.89E-04 8.88E-16 6.34E-15 4.45E-01 3.78E-01 6.51E-03 2.11E-01 1.28E+01 5.63E+00 1.07E+01 4.46E-14 1.32E-04 2.09E+00 4.09E-15
F30 0.00E+00 6.72E-01 2.49E-01 6.67E-01 6.72E-01 4.36E+00 1.25E+00 2.62E+00 6.68E-01 9.29E+00 5.67E+03 6.67E-01 1.89E+00 2.18E+00 6.67E-01
F31 0.00E+00 2.63E-06 0.00E+00 0.00E+00 1.50E-02 9.09E-01 6.46E-03 3.10E-02 1.57E-02 1.13E+00 2.68E+01 2.51E-03 2.18E-02 1.12E-02 2.15E-03
F32 0.00E+00 4.41E-02 2.74E-05 3.85E-01 1.08E+00 4.01E+00 2.24E-05 2.18E-05 9.73E+00 1.27E+01 6.77E+00 9.68E-01 6.21E-10 5.15E+00 2.04E-01
F33 0.00E+00 1.82E-19 5.93E-07 1.35E-31 1.35E-31 8.78E-04 1.35E-31 1.10E-01 2.56E-02 1.59E-20 5.44E-23 3.85E-07 1.35E-31 9.72E-14 6.10E-07
F34 0.00E+00 9.75E-07 0.00E+00 1.06E+01 4.11E+01 8.44E+00 8.83E+01 7.11E+00 8.02E+01 1.07E+02 6.23E+01 2.98E+00 1.13E+00 4.98E+01 0.00E+00
F35 0.00E+00 2.82E+01 7.86E-04 2.32E+01 2.42E+01 3.79E+01 2.86E+01 3.26E+01 9.34E+00 3.47E+01 2.58E+02 2.68E+01 5.01E+01 2.71E+01 2.63E+01
F36 0.00E+00 1.41E-05 1.21E-95 1.11E-85 1.25E-07 8.71E+00 3.33E-03 1.85E-02 5.41E-04 8.83E+01 1.37E+04 3.27E-32 1.40E-07 2.41E+01 3.83E-84
F37 0.00E+00 2.42E+03 8.18E+01 4.43E+03 6.14E+03 4.53E+03 3.10E+03 1.08E+03 5.32E+03 4.30E+03 1.23E+03 6.46E+03 6.32E+01 4.82E+03 1.24E+03
F38 0.00E+00 4.32E-09 1.66E-104 2.26E-89 3.88E-11 4.29E-03 1.45E-06 2.80E-06 7.72E-06 3.82E-02 7.57E+00 8.05E-36 7.67E-11 5.29E-11 3.09E-87
F39 -1.17E+03 -1.08E+03 -1.17E+03 -1.03E+03 -1.02E+03 -1.02E+03 -1.17E+03 -1.17E+03 -9.92E+02 -1.01E+03 -1.12E+03 -9.36E+02 -1.17E+03 -1.00E+03 -1.11E+03
F40 0.00E+00 2.59E-39 3.72E-126 3.84E-197 1.95E-31 7.09E-08 9.40E-24 1.64E-08 1.71E-09 4.24E-11 5.53E-05 3.08E-111 2.77E-23 7.73E-07 6.97E-129
F41 0.00E+00 1.96E-07 2.79E-101 1.38E-87 2.49E-09 2.02E-01 7.00E-05 5.82E-04 4.76E-04 1.71E+00 3.12E+02 4.06E-34 5.05E-09 1.07E+00 2.91E-85
F42 -4.93E+03 3.01E+01 3.88E+03 1.06E+03 1.43E+03 8.60E+03 1.34E+05 1.05E+04 1.28E+01 5.44E+03 2.57E+05 2.93E+02 9.08E+03 2.05E+03 3.12E+03
F43 0.00E+00 1.51E-07 6.83E-59 4.75E-09 9.96E-01 1.97E+02 2.72E+02 2.90E+02 3.08E-05 1.40E+02 1.32E+02 2.03E-11 2.33E+02 1.73E+01 5.03E+02
Table 13
The average results of the algorithms for F29 to F43 (D = 100).
Function Goal TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
F29 0.00E+00 5.60E-03 8.88E-16 7.99E-15 3.82E+00 3.79E+00 7.74E+00 1.25E+01 1.33E+01 1.02E+01 1.85E+01 6.88E-09 4.51E+00 7.65E+00 4.20E-15
F30 0.00E+00 9.66E-01 2.51E-01 6.67E-01 1.16E+02 7.40E+02 6.30E+04 4.01E+02 1.56E+00 8.38E+03 3.88E+06 6.67E-01 2.18E+02 7.49E+02 6.67E-01
F31 0.00E+00 1.42E-03 0.00E+00 0.00E+00 1.39E+00 3.86E+00 2.18E+01 7.03E+00 4.30E+00 2.67E+01 7.16E+02 2.98E-03 1.05E+00 4.27E+00 0.00E+00
F32 0.00E+00 4.65E+00 3.67E-05 3.95E+00 1.23E+01 2.40E+01 5.55E+01 8.04E+00 2.01E+01 6.67E+01 2.44E+02 6.17E+00 3.92E-01 1.68E+01 1.33E+00
F33 0.00E+00 7.39E-20 2.37E-07 1.35E-31 1.35E-31 1.01E-02 1.35E-31 1.01E-01 2.20E-02 3.95E-20 4.69E-23 5.02E-07 1.35E-31 9.53E-14 2.89E-06
F34 0.00E+00 2.17E-03 0.00E+00 3.30E+00 1.49E+02 1.84E+02 8.10E+02 2.63E+02 2.23E+02 6.24E+02 7.37E+02 7.06E+00 6.12E+01 1.60E+02 0.00E+00
F35 0.00E+00 9.88E+01 1.01E-02 9.56E+01 1.31E+02 2.80E+02 1.90E+03 6.65E+02 1.00E+02 4.18E+02 9.73E+03 9.71E+01 3.69E+02 2.43E+02 9.77E+01
F36 0.00E+00 6.15E-02 1.11E-97 1.46E-76 1.26E+03 6.80E+03 3.62E+04 8.29E+03 7.13E+02 4.87E+04 1.43E+06 5.10E-14 1.35E+02 1.59E+04 1.79E-80
F37 0.00E+00 1.68E+04 7.11E-01 2.43E+04 2.22E+04 1.87E+04 2.56E+04 1.25E+04 1.90E+04 2.21E+04 1.82E+04 2.51E+04 3.41E+03 1.93E+04 5.93E+03
F38 0.00E+00 6.99E-06 2.11E-96 2.51E-80 1.31E-01 8.66E-01 6.15E+00 4.38E+00 1.69E-04 7.47E+00 2.13E+02 1.22E-17 2.04E-02 8.63E-01 5.26E-86
F39 -3.92E+03 -3.20E+03 -3.92E+03 -3.21E+03 -3.31E+03 -3.25E+03 -2.41E+03 -3.41E+03 -3.29E+03 -2.78E+03 -2.77E+03 -2.43E+03 -3.89E+03 -3.13E+03 -3.81E+03
F40 0.00E+00 7.80E-37 2.19E-131 5.18E-197 1.34E-22 5.93E-07 5.33E-04 1.26E-03 2.32E-09 3.51E-09 8.75E-04 1.94E-78 3.46E-15 7.35E-07 3.59E-123
F41 0.00E+00 9.59E-04 4.15E-95 2.85E-78 2.21E+01 1.42E+02 8.37E+02 2.17E+02 8.74E-01 1.15E+03 3.43E+04 1.50E-15 3.04E+00 3.41E+02 2.67E-81
F42 -1.72E+05 9.28E+01 3.58E+04 5.05E+02 2.20E+04 8.60E+04 6.56E+06 5.96E+05 1.01E+05 1.75E+05 6.92E+06 1.20E+02 2.03E+05 5.09E+04 2.19E+04
F43 0.00E+00 2.23E-03 1.36E-15 8.37E+01 9.73E+02 1.14E+03 1.84E+03 1.40E+03 5.18E+00 1.25E+15 1.46E+03 1.57E+01 1.93E+03 1.55E+03 1.70E+03
Table 14
The average results of the algorithms for F29 to F43 (D = 500).
Function Goal TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
F29 0.00E+00 7.64E-03 8.88E-16 7.99E-15 1.15E+01 1.47E+01 2.08E+01 2.02E+01 1.37E+01 1.23E+01 2.05E+01 6.42E-04 1.96E+01 1.28E+01 4.09E-15
F30 0.00E+00 1.04E+00 2.52E-01 6.67E-01 1.18E+06 8.36E+06 4.73E+08 3.82E+08 1.68E+04 1.56E+06 4.01E+08 8.64E-01 5.13E+07 1.59E+06 6.69E-01
F31 0.00E+00 3.96E-03 0.00E+00 1.11E-17 3.23E+02 1.12E+03 8.63E+03 7.59E+03 4.88E+02 4.55E+02 8.65E+03 2.33E-03 2.07E+03 5.18E+02 0.00E+00
F32 0.00E+00 4.38E+01 3.38E-04 3.68E+01 2.00E+02 3.60E+02 5.38E+03 2.64E+03 1.11E+02 4.22E+02 3.22E+03 3.98E+01 8.03E+02 2.32E+02 9.80E+00
F33 0.00E+00 4.83E-20 7.64E-07 1.35E-31 1.35E-31 3.35E-02 1.35E-31 1.08E-01 2.93E-02 2.19E-20 1.71E-23 4.30E-07 3.66E-03 8.52E-14 1.42E-06
F34 0.00E+00 8.36E-03 0.00E+00 0.00E+00 2.34E+03 2.77E+03 7.36E+03 5.59E+03 8.06E+02 4.44E+03 6.61E+03 5.75E+01 3.68E+03 2.77E+03 3.03E-14
F35 0.00E+00 4.99E+02 6.11E-02 4.96E+02 3.25E+03 7.76E+03 2.25E+05 1.32E+05 6.69E+02 4.26E+03 1.60E+05 4.97E+02 3.00E+04 4.02E+03 4.95E+02
F36 0.00E+00 2.69E+00 7.90E-98 4.39E-72 3.22E+06 1.15E+07 8.34E+07 8.93E+07 1.78E+06 5.07E+06 9.64E+07 1.37E-02 2.03E+07 5.49E+06 1.20E-77
F37 0.00E+00 1.44E+05 2.55E+00 1.66E+05 1.31E+05 1.32E+05 1.73E+05 1.39E+05 1.34E+05 1.59E+05 1.57E+05 1.53E+05 9.59E+04 1.43E+05 1.76E+04
F38 0.00E+00 4.34E-05 4.63E-100 1.69E-76 8.99E+01 3.14E+02 2.52E+03 2.20E+03 9.38E-01 1.32E+02 2.53E+03 4.29E-07 6.13E+02 1.50E+02 8.69E-85
F39 -1.96E+04 -1.25E+04 -1.96E+04 -9.63E+03 -1.04E+04 -1.08E+04 -3.67E+03 -9.32E+03 -1.58E+04 -9.94E+03 -8.63E+03 -7.95E+03 -1.46E+04 -1.13E+04 -1.85E+04
F40 0.00E+00 5.48E-36 2.56E-132 3.43E-196 2.43E-17 9.41E-06 2.25E+00 1.09E+00 2.58E-09 3.57E-08 1.10E-02 4.68E-05 2.62E-03 1.05E-06 3.73E-125
F41 0.00E+00 6.14E-02 3.78E-95 1.59E-73 7.80E+04 3.03E+05 1.93E+06 2.04E+06 7.95E+03 1.17E+05 2.23E+06 3.09E-04 4.85E+05 1.22E+05 2.32E-79
F42 -2.10E+07 4.98E+02 5.64E+05 8.85E+01 2.13E+06 7.83E+06 9.56E+07 5.66E+07 3.82E+06 3.42E+06 7.91E+07 2.55E+02 1.57E+07 3.66E+06 5.43E+05
F43 0.00E+00 4.08E+00 2.38E+03 1.66E+03 7.54E+03 1.59E+19 1.19E+04 8.67E+03 1.11E+17 5.97E+20 5.54E+12 3.66E+03 1.33E+04 1.06E+04 7.99E+03
Table 15
The average results of the algorithms for F29 to F43 (D = 1000).
Function Goal TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
F29 0.00E+00 8.59E-03 8.88E-16 9.18E-15 1.37E+01 1.65E+01 2.11E+01 2.07E+01 1.38E+01 1.25E+01 2.08E+01 8.00E-03 2.04E+01 1.32E+01 5.15E-15
F30 0.00E+00 1.20E+00 3.34E-01 6.67E-01 1.50E+07 7.53E+07 3.48E+09 2.48E+09 1.27E+06 9.25E+06 2.13E+09 4.41E+00 9.39E+08 1.16E+07 6.72E-01
F31 0.00E+00 4.42E-03 0.00E+00 1.11E-16 1.30E+03 3.56E+03 2.53E+04 2.14E+04 1.56E+03 1.10E+03 2.06E+04 1.12E-02 1.21E+04 1.36E+03 0.00E+00
F32 0.00E+00 9.06E+01 8.36E-04 8.30E+01 6.64E+02 1.33E+03 1.18E+04 8.22E+03 2.51E+02 9.11E+02 7.96E+03 8.25E+01 4.08E+03 7.07E+02 1.88E+01
F33 0.00E+00 2.43E-20 8.97E-07 1.35E-31 1.35E-31 7.68E-02 1.35E-31 1.02E-01 6.09E-02 3.09E-20 5.40E-23 2.96E-07 1.35E-31 8.73E-14 1.33E-06
F34 0.00E+00 2.99E-02 0.00E+00 0.00E+00 6.39E+03 7.68E+03 1.64E+04 1.41E+04 1.88E+03 9.48E+03 1.47E+04 1.60E+02 1.13E+04 7.00E+03 1.21E-13
F35 0.00E+00 9.99E+02 3.95E-02 9.96E+02 1.05E+04 2.91E+04 5.99E+05 4.07E+05 2.81E+03 9.88E+03 4.07E+05 9.97E+02 1.82E+05 1.05E+04 9.90E+02
F36 0.00E+00 1.46E+01 3.28E-97 6.70E-71 2.85E+07 8.12E+07 5.40E+08 5.02E+08 1.73E+07 2.49E+07 4.66E+08 8.95E+00 2.53E+08 3.12E+07 1.30E-78
F37 0.00E+00 3.27E+05 9.75E+00 3.53E+05 2.98E+05 3.21E+05 3.67E+05 3.31E+05 3.04E+05 3.47E+05 3.44E+05 3.18E+05 2.66E+05 3.23E+05 5.17E+04
F38 0.00E+00 1.18E-04 1.65E-100 9.74E-76 3.84E+02 1.02E+03 7.38E+03 6.21E+03 5.17E+01 3.28E+02 6.03E+03 1.33E-04 3.61E+03 4.04E+02 5.47E-81
F39 -3.92E+04 -2.47E+04 -3.92E+04 -1.58E+04 -1.77E+04 -1.81E+04 -6.18E+03 -1.30E+04 -2.87E+04 -1.78E+04 -1.42E+04 -1.35E+04 -2.17E+04 -1.98E+04 -3.81E+04
F40 0.00E+00 1.02E-36 1.04E-129 5.86E-199 2.05E-17 1.20E-04 2.58E+00 1.49E+00 3.05E-09 1.53E-07 2.47E-02 1.34E-03 9.56E-01 1.56E-06 6.22E-127
F41 0.00E+00 2.48E-01 4.49E-95 1.13E-72 6.49E+05 1.91E+06 1.25E+07 1.17E+07 1.64E+05 5.78E+05 1.09E+07 1.93E-01 5.98E+06 7.11E+05 1.47E-78
F42 -1.67E+08 1.00E+03 1.26E+06 5.94E+02 9.10E+06 2.85E+07 2.28E+08 1.70E+08 1.26E+07 9.08E+06 1.87E+08 6.72E+02 7.69E+07 1.12E+07 1.33E+06
F43 0.00E+00 5.04E+02 9.32E+03 3.54E+03 1.53E+04 2.65E+22 2.57E+04 2.41E+17 1.17E+21 1.55E+23 7.14E+20 8.16E+03 2.81E+04 1.97E+04 1.59E+04
Using the above equation, the error values of the algorithms can be determined and compared with each other. The results for each of the five problem groups of this article are presented in Fig. 21. The average error of the algorithms in providing an optimal response for the constrained and unconstrained functions, as well as the CEC functions, is summarized in Table 23.
Table 23 provides very useful information about the performance of each algorithm over the whole set of 73 optimization problems. Based on this table, the proposed algorithm, with average errors of 1.33E-01, 6.70E-02, and 4.04E-01 for the unconstrained, constrained, and CEC problems, respectively, is very close to the best algorithm in each of the three groups. Moreover, averaging over all three groups, the total error of the TS algorithm is
M. Mirrashid and H. Naderpour Results in Control and Optimization 7 (2022) 100127
Table 16
The average results of the algorithms for C1 to C10.
Algorithm C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
TS 5.10E+04 1.73E+01 1.27E+01 3.50E+01 1.10E+00 4.01E+00 1.31E+02 5.10E+00 2.76E+00 2.00E+01
HHO 5.22E+04 1.74E+01 1.27E+01 1.54E+02 2.39E+00 9.10E+00 2.96E+02 5.66E+00 3.11E+00 2.01E+01
TLBO 1.45E+08 1.73E+01 1.27E+01 2.06E+01 1.07E+00 1.03E+01 4.65E+02 3.96E+00 2.34E+00 1.96E+01
PSO 4.06E+08 1.73E+01 1.27E+01 1.53E+01 1.10E+00 7.28E+00 1.51E+02 5.02E+00 2.35E+00 1.81E+01
GA 2.59E+10 1.96E+01 1.27E+01 2.51E+01 1.12E+00 4.71E+00 1.98E+02 4.79E+00 2.74E+00 2.00E+01
DE 1.52E+10 1.73E+01 1.27E+01 1.96E+01 1.16E+00 7.99E+00 2.08E+02 5.38E+00 2.48E+00 2.01E+01
ABC 8.11E+10 1.46E+02 1.27E+01 3.98E+01 1.06E+00 7.96E+00 9.39E+01 5.21E+00 3.14E+00 1.98E+01
BA 1.27E+10 1.73E+01 1.27E+01 1.49E+03 2.42E+00 1.23E+01 2.92E+02 5.35E+00 2.35E+00 2.01E+01
CS 2.55E+14 1.73E+01 1.27E+01 2.55E+01 1.06E+00 9.22E+00 7.13E+01 5.15E+00 2.75E+00 2.02E+01
HS 2.10E+10 1.73E+01 1.27E+01 6.50E+01 1.22E+00 5.78E+00 5.80E+01 3.71E+00 3.53E+00 1.99E+01
GW 9.12E+07 1.74E+01 1.27E+01 4.87E+01 1.43E+00 1.09E+01 3.72E+02 4.59E+00 4.36E+00 2.04E+01
ICA 5.41E+09 1.73E+01 1.27E+01 4.22E+01 1.21E+00 3.25E+00 2.21E+01 5.10E+00 2.77E+00 2.00E+01
SS 4.36E+09 1.73E+01 1.27E+01 3.16E+01 1.28E+00 4.80E+00 2.76E+02 5.34E+00 2.45E+00 2.00E+01
WO 1.80E+10 1.73E+01 1.27E+01 2.58E+02 1.81E+00 9.58E+00 5.66E+02 5.79E+00 4.51E+00 2.02E+01
Goal 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00
Table 17
Summary information for the functions G01 to G13.
Name in this article | Goal | Dimension | Number of constraints | Bound | Global best
G01 | Minimization | 13 | 9 | [0 U], U = {1,1,. . . ,1,100,100,100,1} | −15
G02 | Maximization | n (in this research, n = 20) | 2 | [0 10] | 0.803619
G03 | Maximization | n (in this research, n = 3) | 1 | [0 1] | 1
G04 | Minimization | 5 | 6 | [V U], U = {102,45,45,45,45}, V = {78,33,27,27,27} | −30 665.539
G05 | Minimization | 4 | 5 | [V U], U = {1200,1200,0.55,0.55}, V = {0,0,−0.55,−0.55} | 5126.4981
G06 | Minimization | 2 | 2 | [V 100], V = {13,0} | −6961.81388
G07 | Minimization | 10 | 8 | [−10 10] | 24.3062091
G08 | Maximization | 2 | 2 | [1 10] | 0.095825
G09 | Minimization | 7 | 4 | [−10 10] | 680.6300573
G10 | Minimization | 8 | 6 | [V U], U = 1000 × {10,10,10,1,1,1,1,1}, V = 10 × {10,100,100,1,1,1,1,1} | 7049.3307
G11 | Minimization | 2 | 1 | [−1 1] | 0.75
G12 | Minimization | 3 | 1 | [0 10] | −1
G13 | Minimization | 5 | 3 | [V U], U = {2.3,2.3,2.3,2.3,2.3}, V = −{2.3,2.3,2.3,2.3,2.3} | 0.0539498
Table 18
The average results of the algorithms for G01 to G13.
Algorithm G01 G02 G03 G04 G05 G06 G07 G08 G09 G10 G11 G12 G13
TS 1.50E+01 6.48E−01 4.56E−01 3.06E+04 5.29E+03 6.96E+03 2.77E+01 9.58E−02 6.81E+02 8.02E+03 9.90E−01 1.00E+00 1.21E+00
HHO 1.42E+01 5.30E−01 6.71E−16 3.05E+04 5.47E+03 6.96E+03 3.32E+01 9.58E−02 6.83E+02 1.54E+04 1.00E+00 1.00E+00 9.59E−01
TLBO 1.39E+01 5.19E−01 2.31E−02 3.07E+04 5.53E+03 6.96E+03 2.45E+01 9.58E−02 6.81E+02 7.25E+03 9.71E−01 1.00E+00 1.42E+00
PSO 1.25E+01 3.29E−01 5.93E−01 3.07E+04 4.37E+15 6.96E+03 2.50E+01 9.58E−02 6.81E+02 7.56E+03 8.96E−01 1.00E+00 1.24E+00
GA 1.05E+01 3.73E−01 6.99E−02 3.03E+04 5.31E+03 6.21E+03 3.36E+01 9.36E−02 6.85E+02 8.75E+03 9.87E−01 1.00E+00 9.72E−01
DE 1.48E+01 8.00E−01 0.00E+00 3.07E+04 5.32E+03 6.96E+03 2.46E+01 9.58E−02 6.81E+02 7.36E+03 1.00E+00 1.00E+00 9.72E−01
ABC 1.38E+01 5.86E−01 0.00E+00 3.05E+04 5.20E+03 6.82E+03 3.17E+01 9.58E−02 6.84E+02 8.06E+03 1.00E+00 1.00E+00 1.24E+00
BA 1.17E+01 2.74E−01 3.56E−01 3.07E+04 5.47E+03 6.96E+03 2.53E+01 7.32E−02 6.81E+02 1.06E+04 9.50E−01 9.41E−01 1.12E+00
CS 1.48E+01 5.20E−01 0.00E+00 3.07E+04 3.61E+03 9.70E+04 2.21E+03 9.58E−02 6.81E+02 1.39E+04 1.00E+00 1.00E+00 1.19E+13
HS 1.46E+01 7.44E−01 3.30E−01 3.06E+04 5.36E+03 6.03E+03 3.23E+01 9.58E−02 6.88E+02 9.67E+03 8.84E−01 1.00E+00 9.98E−01
GW 1.12E+01 7.59E−01 1.29E−16 3.07E+04 5.74E+03 6.96E+03 4.08E+01 9.58E−02 6.83E+02 7.82E+03 9.94E−01 1.00E+00 9.87E−01
ICA 2.55E+02 6.13E−01 3.21E+00 3.12E+04 4.78E+02 6.97E+03 5.80E+02 4.98E+01 9.62E+02 5.96E+03 4.80E−02 1.00E+00 1.34E−05
SS 1.50E+01 4.27E−01 4.77E−16 3.06E+04 5.43E+03 6.96E+03 2.62E+01 9.58E−02 6.81E+02 9.61E+03 1.00E+00 1.00E+00 1.27E+00
WO 7.12E+00 4.63E−01 0.00E+00 3.03E+04 1.89E+19 1.56E+19 6.62E+01 9.58E−02 6.97E+02 1.60E+18 1.00E+00 1.00E+00 6.54E+13
Goal 1.50E+01 8.04E−01 1.00E+00 3.07E+04 5.13E+03 6.96E+03 2.43E+01 9.58E−02 6.81E+02 7.05E+03 7.50E−01 1.00E+00 5.39E−02
Fig. 20. The figures of the considered engineering problems.
Table 19
Information of the engineering problems of this research.
Name in this article | Problem | Goal | Global best | Dimension | Bound | Number of constraints
E1 | Multiple disk clutch brake [71] | Minimize weight | 0.2597 | 5 | X1 ∈ [60 80], X2 ∈ [90 110], X3 ∈ [1 3], X4 ∈ [600 1000], X5 ∈ [2 10] | 8
E2 | Pressure vessel design [72] | Minimize cost | 6000.46259 | 4 | X1 ∈ [0.1 99], X2 ∈ [0.1 99], X3 ∈ [10 500], X4 ∈ [1 240] | 3
E3 | Rolling element bearing [73] | Maximize dynamic load | 83 011.88 | 10 | X1 ∈ [0.5(D + d) 0.6(D + d)], X2 ∈ [0.15(D − d) 0.45(D − d)], X3 ∈ [4 50], X4, X5 ∈ [0.515 0.6], X6 ∈ [0.4 0.5], X7 ∈ [0.6 0.7], X8 ∈ [0.3 0.4], X9 ∈ [0.02 0.1], X10 ∈ [0.6 0.85] | 9
E4 | Speed reducer [58] | Minimize weight | 2993 | 7 | X1 ∈ [2.6 3.6], X2 ∈ [0.7 0.8], X3 ∈ [17 28], X4 ∈ [7.3 8.3], X5 ∈ [7.8 8.3], X6 ∈ [2.9 3.9], X7 ∈ [5 5.5] | 11
E5 | Tension–compression spring [74] | Minimize weight | 0.0126 | 3 | X1 ∈ [0.01 100], X2 ∈ [0.1 100], X3 ∈ [2 100] | 4
E6 | Three-bar truss [75] | Minimize weight | 263.8958 | 2 | X1 ∈ (0 2], X2 ∈ [0.1 1] | 3
E7 | Welded beam design [76] | Minimize cost | 1.731 | 4 | X1, X2, X3, X4 ∈ [0.1 10] | 6
Table 20
Definition of the variables of the considered engineering problems.
Variable | Multiple disk clutch brake | Pressure vessel design | Rolling element bearing | Speed reducer | Tension–compression spring | Three-bar truss | Welded beam design
X1 | Inner radius | Thickness of the head | Pitch diameter | Face width | Number of coils | Area of the side elements | Thickness of the element
X2 | Outer radius | Thickness of the shell | Ball diameter | Module of teeth | Coil diameter | Area of the center element | Thickness of the weld
X3 | Thickness of disks | Radius | Number of balls | Number of teeth | Wire diameter | – | Length
X4 | Actuating force | Length | ri/x2 (ri is the inner raceway groove curvature radius) | Length of the first shaft | – | – | Height
X5 | Number of disks | – | r0/x2 (r0 is the outer raceway groove curvature radius) | Length of the second shaft | – | – | –
X6 | – | – | Minimum value of the ball diameter constants | Diameter of the first shaft | – | – | –
X7 | – | – | Maximum value of the ball diameter constants | Diameter of the second shaft | – | – | –
X8 | – | – | Coefficient of bw | – | – | – | –
X9 | – | – | Constant value based on the mobility conditions of the balls | – | – | – | –
X10 | – | – | Constant value obtained from the simple strength consideration of the outer ring | – | – | – | –
Table 21
The average results for the engineering problems (E1 to E7).
Algorithm E1 E2 E3 E4 E5 E6 E7
TS 2.60E−01 5.87E+03 8.53E+04 2.99E+03 1.30E−02 2.64E+02 1.96E+00
HHO 2.87E−01 6.84E+03 8.01E+04 3.08E+03 1.38E−02 2.64E+02 1.85E+00
TLBO 2.60E−01 5.80E+03 8.55E+04 2.99E+03 1.27E−02 2.64E+02 1.72E+00
PSO 2.63E−01 6.12E+03 8.55E+04 2.99E+03 1.46E−02 2.64E+02 1.75E+00
GA 2.65E−01 6.35E+03 7.73E+04 2.99E+03 1.53E−01 2.64E+02 2.83E+00
DE 2.60E−01 5.95E+03 8.55E+04 2.99E+03 1.28E−02 2.64E+02 1.96E+00
ABC 2.63E−01 6.27E+03 8.46E+04 2.99E+03 1.48E−02 2.64E+02 2.12E+00
BA 2.89E−01 6.37E+03 8.55E+04 3.00E+03 1.66E−02 2.64E+02 1.87E+00
CS 2.60E−01 5.80E+03 8.20E+04 2.99E+03 9.79E+06 2.64E+02 1.86E+00
HS 2.65E−01 6.84E+03 8.08E+04 2.99E+03 1.05E−01 2.64E+02 2.77E+00
GW 2.60E−01 5.87E+03 8.54E+04 3.00E+03 1.27E−02 2.64E+02 1.73E+00
ICA 2.60E−01 5.94E+03 2.66E+05 2.84E+03 1.05E+04 6.69E+01 2.05E+00
SS 3.35E−01 6.79E+03 8.44E+04 3.01E+03 1.61E−02 2.64E+02 1.98E+00
WO 2.80E−01 7.31E+03 8.27E+04 3.06E+03 1.41E−02 2.64E+02 1.92E+00
Goal 2.60E−01 6.00E+03 8.30E+04 2.99E+03 1.26E−02 2.64E+02 1.73E+00
Table 22
The best solution obtained by the algorithm for the engineering problems (E1 to E7).
Problem Solution Fitness
X1X2X3X4X5X6X7X8X9X10
E1 70.000 90.000 1.000 1000.000 2.313 – – – – – 0.2598
E2 0.728 0.360 37.704 239.923 – – – – – – 5804.5158
E3 125.723 21.423 11.001 0.515 0.515 0.500 0.700 0.300 0.091 0.601 85539.1556
E4 3.500 0.700 17.000 7.300 7.715 3.350 5.287 – – – 2994.4711
E5 0.052 0.354 11.438 – – – – – – – 0.0127
E6 0.788 0.409 – – – – – – – – 263.8960
E7 0.205 3.457 9.100 0.206 – – – – – – 1.7323
Fig. 21. The average errors.
Table 23
Errors (Average).
Problem type TS HHO TLBO PSO GA DE ABC BA CS HS GW ICA SS WO
Unconstrained problems 1.33E−01 1.12E−01 1.47E−01 2.23E−01 2.27E−01 3.83E−01 3.68E−01 4.12E−01 2.34E−01 5.36E−01 2.70E−01 1.90E−01 2.01E−01 2.16E−01
Constrained problems 6.70E−02 1.89E−01 6.69E−02 6.71E−02 1.91E−01 5.51E−02 1.06E−01 1.65E−01 1.93E−01 1.55E−01 5.11E−02 4.93E−01 2.05E−01 3.77E−01
CEC functions 4.04E−01 5.99E−01 4.78E−01 4.12E−01 4.17E−01 4.54E−01 5.31E−01 6.90E−01 5.38E−01 4.10E−01 5.69E−01 3.86E−01 4.45E−01 6.60E−01
Average (Total) 2.01E−01 3.00E−01 2.30E−01 2.34E−01 2.78E−01 2.97E−01 3.35E−01 4.23E−01 3.22E−01 3.67E−01 2.96E−01 3.57E−01 2.84E−01 4.18E−01
Fig. 22. Average Error (73 Problems with 30 runs for each one) and the rank of the TS algorithm.
equal to 2.01E−01, which is the best performance compared with the other 13 algorithms. These results are also illustrated
in Fig. 22.
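As a quick numerical check, the averaging behind the total in Table 23 can be reproduced in a few lines. The per-group values below are the TS entries reported in the table; `overall_average` is a hypothetical helper written for this sketch, not code from the article:

```python
# The values below are the per-group average errors reported in Table 23 for
# the TS algorithm; overall_average is a hypothetical helper for this sketch.
def overall_average(group_errors):
    return sum(group_errors) / len(group_errors)

ts_group_errors = {
    "unconstrained": 1.33e-01,
    "constrained": 6.70e-02,
    "CEC": 4.04e-01,
}

total_error = overall_average(list(ts_group_errors.values()))
print(round(total_error, 3))  # 0.201, i.e. the 2.01E-01 total reported for TS
```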
5.2. Algorithm complexity
In this section, the complexity of the algorithm is calculated by Eqs. (28) to (30), following the approach presented by
Kumar et al. [77]. To determine the complexity, the results obtained for the 20 constrained
problems defined in this article were used to quantify the complexity of each algorithm. The results can be seen in Fig. 23.
Based on these results, the complexity of the algorithm presented in this paper has a reasonable and acceptable value.
Complexity = (T2 − T1) / T1 (28)

in which

T1 = (Σ_{i=1}^{20} t1,i) / 20 (29)

T2 = (Σ_{i=1}^{20} t2,i) / 20 (30)
In which, the constant 20 is the number of test problems solved. For T1, the parameter t1,i is computed by measuring the time
required to evaluate a single solution 200,000 times on test problem ‘‘i’’. For T2, each t2,i is computed by measuring the time
required for the complete algorithm to carry out 200,000 evaluations on test problem ‘‘i’’.
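Eqs. (28) to (30) translate directly into a small timing harness. The sketch below assumes hypothetical `evaluate` and `run_algorithm` callables supplied by the user, and the demonstration uses a reduced evaluation count so it runs quickly; the article itself measures 200,000 evaluations per problem:

```python
import time

# A timing harness following Eqs. (28)-(30). `evaluate` and `run_algorithm`
# are hypothetical callables; the article measures 200,000 evaluations per
# problem, reduced here for the demonstration.
def measure_complexity(evaluate, run_algorithm, problems, n_evals=200_000):
    t1, t2 = [], []
    for problem in problems:
        start = time.perf_counter()
        for _ in range(n_evals):          # t1,i: raw evaluation cost (Eq. 29)
            evaluate(problem)
        t1.append(time.perf_counter() - start)
        start = time.perf_counter()
        run_algorithm(problem, n_evals)   # t2,i: full-algorithm cost (Eq. 30)
        t2.append(time.perf_counter() - start)
    T1 = sum(t1) / len(t1)
    T2 = sum(t2) / len(t2)
    return (T2 - T1) / T1                 # Eq. (28)

# Toy demonstration on a sphere function: the "algorithm" adds bookkeeping
# on top of every evaluation, so its complexity comes out small but positive.
sphere = lambda x: sum(v * v for v in x)

def toy_algorithm(x, n_evals):
    best = float("inf")
    for _ in range(n_evals):
        best = min(best, sphere(x))
    return best

c = measure_complexity(sphere, toy_algorithm, [[1.0] * 5], n_evals=20_000)
```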
5.3. Non-parametric tests
The non-parametric Wilcoxon rank-sum test was performed for all the examples of this research, at the 5%
significance level, to determine the significant differences between the considered algorithms and TS. The results are provided in Sup.
2, attached to this article.
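The rank-sum comparison can be sketched with the standard library alone. The normal-approximation version below is illustrative only (a study of this kind would normally rely on a statistics package such as SciPy), and the two samples are synthetic numbers, not results from the article:

```python
import math
from itertools import chain

# Stdlib-only sketch of the two-sided Wilcoxon rank-sum test using the
# normal approximation; the samples below are synthetic.
def rank_sum_test(a, b):
    pooled = sorted(chain(a, b))
    ranks, i = {}, 0
    while i < len(pooled):                 # assign average ranks, handling ties
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[v] for v in a)           # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

alg_a = [0.10 + 0.01 * k for k in range(10)]   # 30-run errors would go here
alg_b = [0.50 + 0.01 * k for k in range(10)]
z, p = rank_sum_test(alg_a, alg_b)             # p < 0.05: significant difference
```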
6. Discussion
In the previous sections, the performance of the proposed algorithm was evaluated on a large number of benchmark problems
(73 different examples), and its results were compared with those of 13 well-known algorithms. The comparison indicates the
satisfactory performance of the TS, which, on average, has the highest rank among all the algorithms. An efficient optimization
technique should also have suitable strategies for exploration and exploitation in order to present acceptable solutions. Exploration and
Fig. 23. The complexity of the considered algorithms.
exploitation are the two cornerstones of problem solving by search [78]. Exploration plays a vital role in preventing the
algorithm from being trapped in local optima. The most important requirement for applicable exploration is a random character
when searching for solutions in the first stages of each iteration of the algorithm, which makes the exploration more
efficient. The second ingredient, exploitation, is used to improve the solutions found by the algorithm. A metaheuristic method
needs to strike a satisfactory equilibrium between exploration and exploitation to produce acceptable performance [79]. In
the algorithm proposed in this article, the galaxy, transit, planet, and neighbor phases all serve exploration; they use random
strategies and work together for a better result. In each of these phases, several techniques, such as the transit operation itself,
random selection between positive and negative regions, removing noise SN times, and applying random numbers with different
intervals, are used to enhance the quality of the exploration. Moreover, more than one optimized solution exists at the end of
the exploration, which makes the exploitation more flexible in finding the best solution. Besides, the use of small noises (SN times)
and random parameters with limited, small intervals makes the last stage of the algorithm (exploitation) more efficient.
Several parameters, such as the lights, the luminosity ratio, and the regions (aphelion, perihelion, neutral), improve the ability of
the algorithm. The use of different parameter types allows the algorithm to handle the characteristics of the search space. Furthermore,
by considering different solutions without sharing their results and fitness values, the evolution of the solutions towards the best
answer proceeds faster and better; this avoids a memetic computing manner, improves the population dynamics across iterations,
and increases the robustness of the TS algorithm. The use of different strategies and techniques, various phases, a reasonable balance
between exploration and exploitation, the proper use of random processes, and the technique of maintaining optimal responses without
sharing are some of the main reasons for the acceptable results of the TS, in comparison with the other considered algorithms, for
the whole set of 73 benchmark problems.
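The interplay described above can be illustrated with a deliberately generic sketch. It mimics the ingredients discussed (random global sampling, several elites kept without sharing information, SN small noisy perturbations around each elite), but it is NOT the authors' actual TS operators; every name and constant in it is an assumption of this sketch:

```python
import random

# Generic exploration/exploitation sketch (NOT the actual TS operators):
# random global sampling explores, several elites are kept without sharing,
# and sn small noisy perturbations around each elite exploit.
def sketch_search(f, dim, bounds, n_iter=200, n_elites=3, sn=5, seed=1):
    random.seed(seed)
    lo, hi = bounds
    sample = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    elites = sorted((sample() for _ in range(10 * n_elites)), key=f)[:n_elites]
    for _ in range(n_iter):
        for i, e in enumerate(elites):
            c = sample()                  # exploration: fresh random candidate
            if f(c) < f(e):
                elites[i] = e = c
            for _ in range(sn):           # exploitation: small repeated noise
                c = [min(hi, max(lo, v + random.gauss(0.0, 0.05 * (hi - lo))))
                     for v in e]
                if f(c) < f(e):
                    elites[i] = e = c
    return min(elites, key=f)

best = sketch_search(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```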
7. Conclusion
In this article, a novel optimization algorithm, namely Transit Search (TS), based on the most powerful method for detecting
exoplanets, was presented. Two parameters (the signal-to-noise ratio and the number of stars) were used in the proposed algorithm,
whose product equals the initial population size in other algorithms. In order to evaluate the performance of the proposed
algorithm, a set of 73 different benchmarks, including unconstrained and constrained problems (mathematical, engineering, and
CEC functions), was used, and the performance of the proposed algorithm was compared with 13 well-known algorithms. The
following key findings can be drawn from the results of this research:
The sensitivity analysis of the algorithm indicated that an increase in the signal-to-noise parameter can improve the algorithm’s
ability to achieve optimal responses in some cases.
After the HHO algorithm, the TS algorithm presented the best performance for the unconstrained problems (43 benchmarks).
The difference between TS and HHO was only 2.1%. This result was obtained from the average value of the errors for this type
of problem (small/large dimensional).
For the CEC functions (10 benchmarks), the TS algorithm ranked second after ICA, with a 1.8% difference
in the overall average error.
The TS algorithm ranked fourth for the constrained problems (13 mathematical and 7 engineering benchmarks). The
difference from the best algorithm here (GW) was only 1.59%, which indicates that the performance of the proposed
algorithm for the constrained problems is also acceptable.
The total average error over the whole set of 73 problems discussed in this research was calculated for the 14 algorithms, and it was
found that the proposed algorithm presented the best performance, with the lowest average error among the considered optimization
techniques.
The computational complexity of the TS was obtained by evaluating a single solution 200,000 times for each of the 20
constrained problems of this article, and it was found that the complexity of the algorithm presented in this article has a
reasonable value.
The breadth of optimization problems, the computational complexities involved, the scientific advances, and the emergence of
ever more difficult optimization problems make it impossible to define a single method for solving all optimization problems. Therefore,
meta-heuristic methods are widely studied by researchers to provide different optimization techniques. The algorithm presented in this
article is a continuation of the existing studies in the field of optimization. The performance of the TS algorithm in solving more complex
benchmark problems, compared to other algorithms, can be reviewed and evaluated in future work.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared
to influence the work reported in this paper.
Acknowledgments
The authors would like to show their gratitude to Professor Efren Mezura-Montes, Artificial Intelligence Research Institute,
University of Veracruz, Mexico, who provided comments and expertise that greatly assisted this research. The support is gratefully
acknowledged.
Appendix A. Nomenclature
Parameter Definition
C(x) constraint values
c1 random number between 0 and 1
c2 random vector between 0 and 1
c3 random number between 0 and 1
c4 random number between 0 and 1
c5 random vector between 0 and 1
c6 random number between −1 and 1
c7 random vector between 0 and 1
c8 random number between 0 and 1
c9 random number between −1 and 1
c10 random vector between −1 and 1
c11 random number between 0 and 1
c12 random number between 0 and 1
c13 random number between −1 and 1
c14 random vector between −1 and 1
c15 random number between 0 and 2
c16 random number between 0 and 1
c17 random vector between 0 and 1
ck a random number (1, 2, 3, or 4)
di distance between the telescope and the star i
D difference between the situation under study and the center of the galaxy
f(x) objective functions
fB the fitness corresponding to LB
fS the fitness corresponding to LS
fP the fitness corresponding to LP
g(x) constraint values for the inequality constraints
h(x) constraint values for the equality constraints
K knowledge
LB location of the best planet
LE LP in the exploitation phase
LGalaxy center location of the galaxy
Li luminosity of the star i
Lm signal
LN location of the neighbor planet
LP final location of the detected planet
Lr random location
LR random region
LS location of the stars
LT location of the telescope
Lz initial location of the detected planet
m number of the equality constraints
M1 location (in LS) and fitness (in fS)
M2 signal properties (in LS) and brightness (in fS)
nIt number of iterations
ns number of host stars
P random power between 1 and (ns × SN)
PT probability of transit, denoted by 1 (transit) or zero (non-transit)
q number of the inequality constraints
Ri rank of the star i
RL luminance ratio
t1,i the time to evaluate a single solution 200,000 times on test problem ‘‘i’’
t2,i the time for the complete algorithm to carry out 200,000 evaluations on test problem ‘‘i’’
X solutions
YG the goal values
YP the estimated values
SN signal-to-noise ratio
z zone parameter that is equal to 1 or 2
δ a small tolerance (1E–04 in this article)
Appendix B. The constrained functions and problems
The constrained problems considered in this article, which is a collection of 13 mathematical and 7 engineering problems, can
be found in this appendix.
G01 = 5 Σ_{i=1}^{4} xi − 5 Σ_{i=1}^{4} xi^2 − Σ_{i=5}^{13} xi
subject to:
g1 = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
g2 = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
g3 = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
g4 = −8x1 + x10 ≤ 0
g5 = −8x2 + x11 ≤ 0
g6 = −8x3 + x12 ≤ 0
g7 = −2x4 − x5 + x10 ≤ 0
g8 = −2x6 − x7 + x11 ≤ 0
g9 = −2x8 − x9 + x12 ≤ 0

G02 = | (Σ_{i=1}^{n} cos^4(xi) − 2 Π_{i=1}^{n} cos^2(xi)) / sqrt(Σ_{i=1}^{n} i xi^2) |
subject to:
g1 = −Π_{i=1}^{n} xi + 0.75 ≤ 0
g2 = Σ_{i=1}^{n} xi − 0.75n ≤ 0

G03 = (sqrt(n))^n Π_{i=1}^{n} xi
subject to:
g1 = Σ_{i=1}^{n} xi^2 − 1 = 0
G04 = 5.3578547 x3^2 + 0.8356891 x1 x5 + 37.293239 x1 − 40792.141
subject to:
g1 = u(x) − 92 ≤ 0
g2 = −u(x) ≤ 0
g3 = v(x) − 110 ≤ 0
g4 = −v(x) + 90 ≤ 0
g5 = w(x) − 25 ≤ 0
g6 = −w(x) + 20 ≤ 0
where:
u(x) = 85.334407 + 0.0056858 x2 x5 + 0.0006262 x1 x4 − 0.0022053 x3 x5
v(x) = 80.51249 + 0.0071317 x2 x5 + 0.0029955 x1 x2 + 0.0021813 x3^2
w(x) = 9.300961 + 0.0047026 x3 x5 + 0.0012547 x1 x3 + 0.0019085 x3 x4

G05 = 3 x1 + 10^−6 x1^3 + 2 x2 + (2 × 10^−6 / 3) x2^3
subject to:
g1 = x3 − x4 − 0.55 ≤ 0
g2 = x4 − x3 − 0.55 ≤ 0
g3 = 1000 sin(−x3 − 0.25) + 1000 sin(−x4 − 0.25) + 894.8 − x1 = 0
g4 = 1000 sin(x3 − 0.25) + 1000 sin(x3 − x4 − 0.25) + 894.8 − x2 = 0
g5 = 1000 sin(x4 − 0.25) + 1000 sin(x4 − x3 − 0.25) + 1294.8 = 0

G06 = (x1 − 10)^3 + (x2 − 20)^3
subject to:
g1 = −(x1 − 5)^2 − (x2 − 5)^2 + 100 ≤ 0
g2 = (x1 − 6)^2 + (x2 − 5)^2 − 82.81 ≤ 0

G07 = x1^2 + x2^2 + x1 x2 − 14 x1 − 16 x2 + (x3 − 10)^2 + 4 (x4 − 5)^2 + (x5 − 3)^2 + 2 (x6 − 1)^2 + 5 x7^2 + 7 (x8 − 11)^2 + 2 (x9 − 10)^2 + (x10 − 7)^2 + 45
subject to:
g1 = 4 x1 + 5 x2 − 3 x7 + 9 x8 − 105 ≤ 0
g2 = 10 x1 − 8 x2 − 17 x7 + 2 x8 ≤ 0
g3 = −8 x1 + 2 x2 + 5 x9 − 2 x10 − 12 ≤ 0
g4 = 3 (x1 − 2)^2 + 4 (x2 − 3)^2 + 2 x3^2 − 7 x4 − 120 ≤ 0
g5 = 5 x1^2 + 8 x2 + (x3 − 6)^2 − 2 x4 − 40 ≤ 0
g6 = 0.5 (x1 − 8)^2 + 2 (x2 − 4)^2 + 3 x5^2 − x6 − 30 ≤ 0
g7 = x1^2 + 2 (x2 − 2)^2 − 2 x1 x2 + 14 x5 − 6 x6 ≤ 0
g8 = −3 x1 + 6 x2 + 12 (x9 − 8)^2 − 7 x10 ≤ 0

G08 = sin^3(2π x1) sin(2π x2) / (x1^3 (x1 + x2))
subject to:
g1 = x1^2 − x2 + 1 ≤ 0
g2 = 1 − x1 + (x2 − 4)^2 ≤ 0

G09 = (x1 − 10)^2 + 5 (x2 − 12)^2 + x3^4 + 3 (x4 − 11)^2 + 10 x5^6 + 7 x6^2 + x7^4 − 4 x6 x7 − 10 x6 − 8 x7
subject to:
g1 = 2 x1^2 + 3 x2^4 + x3 + 4 x4^2 + 5 x5 − 127 ≤ 0
g2 = 7 x1 + 3 x2 + 10 x3^2 + x4 − x5 − 282 ≤ 0
g3 = 23 x1 + x2^2 + 6 x6^2 − 8 x7 − 196 ≤ 0
g4 = 4 x1^2 + x2^2 − 3 x1 x2 + 2 x3^2 + 5 x6 − 11 x7 ≤ 0

G10 = x1 + x2 + x3
subject to:
g1 = −1 + 0.0025 (x4 + x6) ≤ 0
g2 = −1 + 0.0025 (−x4 + x5 + x7) ≤ 0
g3 = −1 + 0.01 (−x5 + x8) ≤ 0
g4 = 100 x1 − x1 x6 + 833.33252 x4 − 83333.333 ≤ 0
g5 = x2 x4 − x2 x7 − 1250 x4 + 1250 x5 ≤ 0
g6 = x3 x5 − x3 x8 − 2500 x5 + 1250000 ≤ 0

G11 = x1^2 + (x2 − 1)^2
subject to:
g1 = x2 − x1^2 = 0

G12 = −(1 − 0.01 ((x1 − 5)^2 + (x2 − 5)^2 + (x3 − 5)^2))
subject to:
g(i,j,k) = (x1 − i)^2 + (x2 − j)^2 + (x3 − k)^2 − 0.0625 ≤ 0, for i, j, k = 1, . . . , 9
G13 = e^(x1 x2 x3 x4 x5)
subject to:
g1 = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 − 10 = 0
g2 = x2 x3 − 5 x4 x5 = 0
g3 = x1^3 + x2^3 + 1 = 0
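As an illustration of how such constrained benchmarks are typically scored, the sketch below evaluates G11 with a simple static penalty. The penalty coefficient is an assumption of this sketch, while the equality tolerance δ = 1E−04 matches the nomenclature of this article:

```python
# Static-penalty scoring of benchmark G11 (a sketch: the penalty coefficient
# is an assumption, while the equality tolerance delta = 1E-04 matches the
# nomenclature of this article).
DELTA = 1e-4

def g11_fitness(x, penalty=1e6):
    x1, x2 = x
    f = x1 ** 2 + (x2 - 1) ** 2
    h = x2 - x1 ** 2                      # equality constraint h(x) = 0
    violation = max(0.0, abs(h) - DELTA)  # feasible within the tolerance
    return f + penalty * violation

best = g11_fitness((0.5 ** 0.5, 0.5))  # known optimum x = (1/sqrt(2), 1/2)
print(round(best, 2))  # 0.75, the global best of G11
```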
E1 = π (x2^2 − x1^2) x3 (x5 + 1) ρ
subject to:
g1 = x2 − x1 − ΔR ≥ 0
g2 = Lmax − (x5 + 1)(x3 + δ) ≥ 0
g3 = pmax − x4 / (π (x2^2 − x1^2)) ≥ 0
g4 = pmax Vsr,max − π n x4 Rsr / (30 π (x2^2 − x1^2)) ≥ 0
g5 = Vsr,max − π n Rsr / 30 ≥ 0
g6 = M − s Ms ≥ 0
g7 = T ≥ 0
g8 = Tmax − T ≥ 0
where:
Rsr = (2/3) (x2^3 − x1^3) / (x2^2 − x1^2)
T = Iz ω / (M + Mf)
M = (2/3) μ x4 x5 (x2^3 − x1^3) / (x2^2 − x1^2)
ΔR = 20 mm
Lmax = 30 mm
ρ = 0.00000078 kg/mm^3
δ = 0.5
n = 250 rpm
pmax = 1 MPa
Tmax = 15 s
μ = 0.5
s = 1.5
Iz = 55 kg m^2
ω = π n / 30 rad/s
Ms = 40 N m
Mf = 3 N m
Vsr,max = 10 m/s
E2 = 0.6224 x2 x3 x4 + 1.7781 x1 x3^2 + 3.1661 x2^2 x4 + 19.84 x2^2 x3
subject to:
g1 = −x2 + 0.0193 x3 ≤ 0
g2 = −x1 + 0.00954 x3 ≤ 0
g3 = −π x3^2 x4 − (4/3) π x3^3 + 1296000 ≤ 0
g4 = x4 − 240 ≤ 0
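The pressure vessel problem (E2) above is easy to express directly in code. The evaluation below is a sketch, and the design plugged in is the classic literature solution (shell thickness 0.8125, head thickness 0.4375, R ≈ 42.0984, L ≈ 176.6366), used only to sanity-check the formulas, not the solution reported in Table 22:

```python
import math

# Sketch evaluation of problem E2 (pressure vessel). Variable order follows
# Table 20: x1 = head thickness, x2 = shell thickness, x3 = radius, x4 = length.
def e2_cost(x):
    x1, x2, x3, x4 = x
    return (0.6224 * x2 * x3 * x4 + 1.7781 * x1 * x3 ** 2
            + 3.1661 * x2 ** 2 * x4 + 19.84 * x2 ** 2 * x3)

def e2_constraints(x):
    """Constraint values g1..g4; a design is feasible when all are <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x2 + 0.0193 * x3,
        -x1 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]

# Classic literature design (an assumption of this sketch, not Table 22):
design = (0.4375, 0.8125, 42.098446, 176.636596)
cost = e2_cost(design)  # roughly 6.06E+03
```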
E3 = −fc x3^(2/3) x2^1.8 (for x2 ≤ 25.4)
E3 = −3.647 fc x3^(2/3) x2^1.4 (for x2 > 25.4)
subject to:
g1 = φ / (2 sin^−1(x2/x1)) − x3 + 1 ≥ 0
g2 = 2 x2 − x6 (D − d) ≥ 0
g3 = x7 (D − d) − 2 x2 ≥ 0
g4 = x10 bw − x2 ≤ 0
g5 = x1 − 0.5 (D + d) ≥ 0
g6 = (0.5 + x9)(D + d) − x1 ≥ 0
g7 = 0.5 (D − x1 − x2) − x8 x2 ≥ 0
g8 = x4 ≥ 0.515
g9 = x5 ≥ 0.515
where:
T = D − d − 2 x2
γ = x2 / x1
fc = 37.91 [1 + {1.04 ((1 − γ)/(1 + γ))^1.72 (x4 (2 x5 − 1) / (x5 (2 x4 − 1)))^0.41}^(10/3)]^(−0.3) × [γ^0.3 (1 − γ)^1.39 / (1 + γ)^(1/3)] [2 x4 / (2 x4 − 1)]^0.41
φ = 2π − 2 cos^−1(X/Y)
X = ((D − d)/2 − 3 (T/4))^2 + (D/2 − T/4 − x2)^2 − (d/2 + T/4)^2
Y = 2 ((D − d)/2 − 3 (T/4)) (D/2 − T/4 − x2)
D = 160 mm
d = 90 mm
bw = 30
E4 = 0.7854 x1 x2^2 (3.3333 x3^2 + 14.9334 x3 − 43.0934) − 1.508 x1 (x6^2 + x7^2) + 7.4777 (x6^3 + x7^3) + 0.7854 (x4 x6^2 + x5 x7^2)
subject to:
g1 = 27 / (x1 x2^2 x3) − 1 ≤ 0
g2 = 397.5 / (x1 x2^2 x3^2) − 1 ≤ 0
g3 = 1.93 x4^3 / (x2 x6^4 x3) − 1 ≤ 0
g4 = 1.93 x5^3 / (x2 x7^4 x3) − 1 ≤ 0
g5 = sqrt((745 x4 / (x2 x3))^2 + 16.9 × 10^6) / (110 x6^3) − 1 ≤ 0
g6 = sqrt((745 x5 / (x2 x3))^2 + 157.5 × 10^6) / (85 x7^3) − 1 ≤ 0
g7 = x2 x3 / 40 − 1 ≤ 0
g8 = 5 x2 / x1 − 1 ≤ 0
g9 = x1 / (12 x2) − 1 ≤ 0
g10 = (1.5 x6 + 1.9) / x4 − 1 ≤ 0
g11 = (1.1 x7 + 1.9) / x5 − 1 ≤ 0
E5 = (2 + x1) x2 x3^2
subject to:
g1 = 1 − x2^3 x1 / (71785 x3^4) ≤ 0
g2 = (4 x2^2 − x3 x2) / (12566 (x2 x3^3 − x3^4)) + 1 / (5108 x3^2) − 1 ≤ 0
g3 = 1 − 140.45 x3 / (x2^2 x1) ≤ 0
g4 = (x2 + x3) / 1.5 − 1 ≤ 0
E6 = (2 sqrt(2) x1 + x2) L
subject to:
g1 = ((sqrt(2) x1 + x2) / (sqrt(2) x1^2 + 2 x1 x2)) F − σ ≤ 0
g2 = (x2 / (sqrt(2) x1^2 + 2 x1 x2)) F − σ ≤ 0
g3 = (1 / (sqrt(2) x2 + x1)) F − σ ≤ 0
where:
L = 100 cm
F = 2 kN/cm^2
σ = 2 kN/cm^2
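The three-bar truss (E6) is compact enough to evaluate end to end. The design plugged in below is the well-known literature optimum (x1 ≈ 0.7887, x2 ≈ 0.4082), used here only to sanity-check the formulas:

```python
import math

# Sketch evaluation of problem E6 (three-bar truss) with the constants from
# the article: L = 100 cm, F = 2 kN/cm^2, sigma = 2 kN/cm^2.
L, F, SIGMA = 100.0, 2.0, 2.0

def e6_weight(x1, x2):
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def e6_constraints(x1, x2):
    """g1..g3 of E6; a design is feasible when all values are <= 0."""
    d = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    return [
        (math.sqrt(2.0) * x1 + x2) / d * F - SIGMA,
        x2 / d * F - SIGMA,
        1.0 / (math.sqrt(2.0) * x2 + x1) * F - SIGMA,
    ]

# Well-known literature optimum, used only to check the formulas above.
w = e6_weight(0.78867513, 0.40824828)  # roughly 263.8958
```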
E7 = 1.10471 x2^2 x3 + 0.04811 x4 x1 (14 + x3)
subject to:
g1 = τ − τmax ≤ 0
g2 = σ − σmax ≤ 0
g3 = x2 − x1 ≤ 0
g4 = 0.125 − x2 ≤ 0
g5 = δ − 0.25 ≤ 0
g6 = F − Fc ≤ 0
where:
τ = sqrt(τ1^2 + 2 τ1 τ2 (x3 / (2R)) + τ2^2)
τ1 = F / (sqrt(2) x2 x3)
τ2 = F (L + x3/2) R / J
σ = 6 F L / (x1 x4^2)
R = sqrt(x3^2 / 4 + ((x2 + x4) / 2)^2)
δ = 4 F L^3 / (E x4^3 x1)
Fc = (4.013 E sqrt(x4^2 x1^6 / 36) / L^2) (1 − (x4 / (2L)) sqrt(E / (4G)))
J = 2 {sqrt(2) x2 x3 [x3^2 / 12 + ((x2 + x4) / 2)^2]}
τmax = 13600 psi
σmax = 30000 psi
F = 6000 lb
L = 14 in
E = 30 × 10^6 psi
G = 12 × 10^6 psi
Appendix C. Supplementary data
Supplementary material related to this article can be found online at https://doi.org/10.1016/j.rico.2022.100127.
Sup. 1. The Optimization Results
The results for the whole 73 functions including the objective function values (average, standard deviation) and CPU Time
(average, standard deviation) can be found in the supplementary file (Sup.1) attached to this article.
Sup. 2. Wilcoxon rank-sum Results
The results of the Wilcoxon rank-sum tests can be found in the supplementary file (Sup. 2) attached to this article.
References
[1] Cao B, Wang X, Zhang W, Song H, Lv Z. A many-objective optimization model of industrial internet of things based on private blockchain. IEEE Netw
2020;34(5):78–83.
[2] Menesy AS, Sultan HM, Korashy A, Banakhr FA, Ashmawy MG, Kamel S. Effective parameter extraction of different polymer electrolyte membrane fuel
cell stack models using a modified artificial ecosystem optimization algorithm. IEEE Access 2020;8:31892–909.
[3] Tajalli M, Mehrabipour M, Hajbabaie A. Network-level coordinated speed optimization and traffic light control for connected and automated vehicles.
IEEE Trans Intell Transp Syst 2020.
[4] Zhao M-M, Wu Q, Zhao M-J, Zhang R. Intelligent reflecting surface enhanced wireless network: Two-timescale beamforming optimization. IEEE Trans
Wireless Commun 2020.
[5] Hu Y, Zhang Y, Gong D. Multiobjective particle swarm optimization for feature selection with fuzzy cost. IEEE Trans Cybern 2020.
[6] Abualigah L, Diabat A. A novel hybrid antlion optimization algorithm for multi-objective task scheduling problems in cloud computing environments.
Cluster Comput 2020;1–19.
[7] Injeti SK, Thunuguntla VK. Optimal integration of DGs into radial distribution network in the presence of plug-in electric vehicles to minimize daily active
power losses and to improve the voltage profile of the system using bio-inspired optimization algorithms. Prot Control Modern Power Syst 2020;5(1):1–15.
[8] Mirrashid M, Naderpour H. Innovative computational intelligence-based model for vulnerability assessment of RC frames subject to seismic sequence. J
Struct Eng 2021;147(3):04020350.
[9] Merkel M, Gangl P, Schops S. Shape optimization of rotating electric machines using isogeometric analysis. IEEE Trans Energy Convers 2021.
[10] Ghoneim SS, Mahmoud K, Lehtonen M, Darwish MM. Enhancing diagnostic accuracy of transformer faults using teaching-learning-based optimization. IEEE
Access 2021;9:30817–32.
[11] Yigit S. A machine-learning-based method for thermal design optimization of residential buildings in highly urbanized areas of Turkey. J Build Eng
2021;38:102225.
[12] Zhao D, et al. Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2D kapur entropy. Knowl-Based Syst
2021;216:106510.
[13] Shaheen A, Elsayed A, El-Sehiemy RA, Abdelaziz AY. Equilibrium optimization algorithm for network reconfiguration and distributed generation allocation
in power systems. Appl Soft Comput 2021;98:106867.
[14] Naderpour H, Mirrashid M. Proposed soft computing models for moment capacity prediction of reinforced concrete columns. Soft Comput
2020;24(15):11715–29.
[15] Naderpour H, Mirrashid M. Estimating the compressive strength of eco-friendly concrete incorporating recycled coarse aggregate using neuro-fuzzy approach.
J Cleaner Prod 2020;265:121886.
[16] Naderpour H, Mirrashid M. Bio-inspired predictive models for shear strength of reinforced concrete beams having steel stirrups. Soft Comput
2020;24(16):12587–97.
[17] Mirrashid M, Naderpour H. Recent trends in prediction of concrete elements behavior using soft computing (2010–2020). Arch Comput Methods Eng
2021;28(4):3307–27.
[18] Naderpour H, Mirrashid M, Parsa P. Failure mode prediction of reinforced concrete columns using machine learning methods. Eng Struct 2021;248:113263.
[19] Naderpour H, Parsa P, Mirrashid M. Innovative approach for moment capacity estimation of spirally reinforced concrete columns using swarm
intelligence–based algorithms and neural network. Pract Period Struct Des Constr 2021;26(4):04021043.
[20] Mirrashid M, Naderpour H. Computational intelligence-based models for estimating the fundamental period of infilled reinforced concrete frames. J Build
Eng 2022;46:103456.
[21] Yang X-S, Deb S. Cuckoo search via Lévy flights. In: 2009 world congress on nature & biologically inspired computing (NaBIC). IEEE; 2009, p. 210–4.
[22] Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 1997;11(4):341–59.
[23] Coello CAC. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowl Inf Syst 1999;1(3):269–308.
[24] Gao S, Vairappan C, Wang Y, Cao Q, Tang Z. Gravitational search algorithm combined with chaos for unconstrained numerical optimization. Appl Math
Comput 2014;231:48–62.
[25] Yu Y, Gao S, Cheng S, Wang Y, Song S, Yuan F. CBSO: a memetic brain storm optimization with chaotic local search. Memet Comput 2018;10(4):353–67.
[26] Gao S, Yu Y, Wang Y, Wang J, Cheng J, Zhou M. Chaotic local search-based differential evolution algorithms for optimization. IEEE Trans Syst Man
Cybern 2019;51(6):3954–67.
[27] Ji J, Song S, Tang C, Gao S, Tang Z, Todo Y. An artificial bee colony algorithm search guided by scale-free networks. Inform Sci 2019;473:142–65.
[28] Wang Y, Yu Y, Gao S, Pan H, Yang G. A hierarchical gravitational search algorithm with an effective gravitational constant. Swarm Evol Comput
2019;46:118–39.
[29] Abbaszadeh Sori A, Ebrahimnejad A, Motameni H. Elite artificial bees’ colony algorithm to solve robot’s fuzzy constrained routing problem. Comput Intell
2020;36(2):659–81.
[30] Lei Z, Gao S, Gupta S, Cheng J, Yang G. An aggregative learning gravitational search algorithm with self-adaptive gravitational constants. Expert Syst
Appl 2020;152:113396.
[31] Wang X, Zhang L, Wang G, Wang Q, He G. Modeling of relative collision risk based on the ships group situation. J Intell Fuzzy Systems 2021;(Preprint):1–14.
[32] Pirozmand P, Alrezaamiri H, Ebrahimnejad A, Motameni H. A new model of parallel particle swarm optimization algorithm for solving numerical problems.
Malaysian J Comput Sci 2021;34(4):389–407.
[33] Braik M, Ryalat MH, Al-Zoubi H. A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural
Comput Appl 2022;34(1):409–55.
[34] Di Caprio D, Ebrahimnejad A, Alrezaamiri H, Santos-Arteaga FJ. A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights.
Alexandria Eng J 2022;61(5):3403–15.
[35] Ebrahimnejad A, Karimnejad Z, Alrezaamiri H. Particle swarm optimisation algorithm for solving shortest path problems with mixed fuzzy arc weights.
Int J Appl Decis Sci 2015;8(2):203–22.
[36] Ebrahimnejad A, Tavana M, Alrezaamiri H. A novel artificial bee colony algorithm for shortest path problems with fuzzy arc weights. Measurement
2016;93:48–56.
[37] Alrezaamiri H, Ebrahimnejad A, Motameni H. Software requirement optimization using a fuzzy artificial chemical reaction optimization algorithm. Soft
Comput 2019;23(20):9979–94.
[38] Alrezaamiri H, Ebrahimnejad A, Motameni H. Parallel multi-objective artificial bee colony algorithm for software requirement optimization. Requir Eng
2020;25(3):363–80.
[39] Pirozmand P, Ebrahimnejad A, Alrezaamiri H, Motameni H. A novel approach for the next software release using a binary artificial algae algorithm. J
Intell Fuzzy Systems 2021;40(3):5027–41.
[40] Camacho Villalón CL, Stützle T, Dorigo M. Grey wolf, firefly and bat algorithms: Three widespread algorithms that do not contain any novelty. In:
International conference on swarm intelligence. Springer; 2020, p. 121–33.
[41] Dorigo M. Swarm intelligence: A few things you need to know if you want to publish in this journal. Swarm Intell 2016.
[42] García-Martínez C, Gutiérrez PD, Molina D, Lozano M, Herrera F. Since CEC 2005 competition on real-parameter optimisation: a decade of research,
progress and comparative analysis’s weakness. Soft Comput 2017;21(19):5573–83.
[43] Sörensen K. Metaheuristics—the metaphor exposed. Int Trans Oper Res 2015;22(1):3–18.
[44] Camacho Villalón CL, Stützle T, Dorigo M. Cuckoo search ≡ (𝜇+𝜆)–evolution strategy. IRIDIA–technical report series, 2021.
[45] Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Comput 1997;1(1):67–82.
[46] NASA. Exoplanet exploration: 5 ways to find a planet. The National Aeronautics and Space Administration (NASA), https://exoplanets.nasa.gov/alien-worlds/ways-to-find-a-planet/ (accessed).
[47] Hahn V. Artistic representation of a star. 2019, https://commons.wikimedia.org/wiki/File:White_Star_1.png: Victor Hahn, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons.
[48] Budassi PC. Artist’s conception of the Milky way galaxy. 2020, https://commons.wikimedia.org/wiki/File:Milky_way.png: Pablo Carlos Budassi, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons.
[49] Haswell CA. Transiting exoplanets: Measuring the properties of planetary systems. Cambridge University Press; 2010.
[50] Jaschek C, Jaschek M. The classification of stars. Cambridge University Press; 1990, p. 430.
[51] Gray R, Corbally C. Princeton series in astrophysics: stellar spectral classification. Princeton University Press; 2009, p. 592.
[52] Aleti A, Moser I. A systematic literature review of adaptive parameter control methods for evolutionary algorithms. ACM Comput Surv 2016;49(3):1–35.
[53] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN’95-international conference on neural networks, Vol. 4. IEEE; 1995, p.
1942–8.
[54] Geem ZW, Kim JH, Loganathan GV. A new heuristic optimization algorithm: harmony search. Simulation 2001;76(2):60–8.
[55] Atashpaz-Gargari E, Lucas C. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE congress
on evolutionary computation. IEEE; 2007, p. 4661–7.
[56] Karaboga D, Basturk B. On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 2008;8(1):687–97.
[57] Yang X-S. A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010). Springer; 2010, p. 65–74.
[58] Rao RV, Savsani VJ, Vakharia D. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput
Aided Des 2011;43(3):303–15.
[59] Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw 2014;69:46–61.
[60] Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw 2016;95:51–67.
[61] Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems.
Adv Eng Softw 2017;114:163–91.
[62] Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future Gener Comput Syst
2019;97:849–72.
[63] Surjanovic S, Bingham D. Virtual library of simulation experiments: test functions and datasets. http://www.sfu.ca/~ssurjano (accessed).
[64] Abdullah JM, Ahmed T. Fitness dependent optimizer: inspired by the bee swarming reproductive process. IEEE Access 2019;7:43473–86.
[65] Price K, Awad N, Ali M, Suganthan P. The 100-digit challenge: problem definitions and evaluation criteria for the 100-digit challenge special session and
competition on single objective numerical optimization. Nanyang Technological University; 2018.
[66] Arora JS. Introduction to optimum design. 4th ed.. Elsevier; 2016, p. 968.
[67] Coello CAC. Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 2000;41(2):113–27.
[68] Deb K. An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Engrg 2000;186(2–4):311–38.
[69] Mezura-Montes E, Coello CAC. Constraint-handling in nature-inspired numerical optimization: past, present and future. Swarm Evol Comput
2011;1(4):173–94.
[70] Mezura-Montes E, Coello CAC, Tun-Morales EI. Simple feasibility rules and differential evolution for constrained optimization. In: Mexican international
conference on artificial intelligence. Springer; 2004, p. 707–16.
[71] Rao RV, Waghmare G. A new optimization algorithm for solving complex constrained design optimization problems. Eng Optim 2017;49(1):60–83.
[72] Kannan B, Kramer SN. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to
mechanical design. J Mech Des 1994;116(2):405–11.
[73] Changsen W. Analysis of rolling element bearings. Mechanical Engineering Publications Ltd; 1991.
[74] Arora J. Introduction to optimum design. McGraw-Hill; 1989.
[75] Ray T, Saini P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 2001;33(6):735–48.
[76] Ragsdell K, Phillips D. Optimal design of a class of welded structures using geometric programming. J Eng Ind 1976;98(3):1021–5.
[77] Kumar A, Wu G, Ali MZ, Mallipeddi R, Suganthan PN, Das S. A test-suite of non-convex constrained optimization problems from the real-world and some
baseline results. Swarm Evol Comput 2020;56:100693.
[78] Črepinšek M, Liu S-H, Mernik M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput Surv 2013;45(3):1–33.
[79] Morales-Castañeda B, Zaldivar D, Cuevas E, Fausto F, Rodríguez A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol Comput
2020;54:100671.