Please cite this article as: A.A. Heidari, S. Mirjalili, H. Faris et al., Harris hawks optimization: Algorithm and applications, Future Generation Computer Systems (2019),
https://doi.org/10.1016/j.future.2019.02.028.
Future Generation Computer Systems xxx (xxxx) xxx
Contents lists available at ScienceDirect
Future Generation Computer Systems
journal homepage: www.elsevier.com/locate/fgcs
Harris hawks optimization: Algorithm and applications
Ali Asghar Heidari a, Seyedali Mirjalili b, Hossam Faris c, Ibrahim Aljarah c, Majdi Mafarja d, Huiling Chen e
aSchool of Surveying and Geospatial Engineering, University of Tehran, Tehran, Iran
bSchool of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia
cKing Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan
dDepartment of Computer Science, Birzeit University, P.O. Box 14, West Bank, Palestine
eDepartment of Computer Science, Wenzhou University, Wenzhou 325035, China
highlights
A mathematical model is proposed to simulate the hunting behavior of Harris’ Hawks.
An optimization algorithm is proposed using the mathematical model.
The proposed HHO algorithm is tested on several benchmarks.
The performance of HHO is also examined on several engineering design problems.
The results show the merits of the HHO algorithm as compared to the existing algorithms.
article info
Article history:
Received 2 June 2018
Received in revised form 29 December 2018
Accepted 18 February 2019
Available online xxxx
Keywords:
Nature-inspired computing
Harris hawks optimization algorithm
Swarm intelligence
Optimization
Metaheuristic
abstract
In this paper, a novel population-based, nature-inspired optimization paradigm is proposed, which
is called Harris Hawks Optimizer (HHO). The main inspiration of HHO is the cooperative behavior
and chasing style of Harris’ hawks in nature, called the surprise pounce. In this intelligent strategy,
several hawks cooperatively pounce on a prey from different directions in an attempt to surprise it.
Harris hawks can reveal a variety of chasing patterns based on the dynamic nature of scenarios
and escaping patterns of the prey. This work mathematically mimics such dynamic patterns and
behaviors to develop an optimization algorithm. The effectiveness of the proposed HHO optimizer is
checked, through a comparison with other nature-inspired techniques, on 29 benchmark problems and
several real-world engineering problems. The statistical results and comparisons show that the HHO
algorithm provides very promising and occasionally competitive results compared to well-established
metaheuristic techniques.
©2019 Published by Elsevier B.V.
1. Introduction
Many real-world problems in machine learning and artificial intelligence generally have a continuous, discrete, constrained or unconstrained nature [1,2]. Due to these characteristics, it is hard to tackle some classes of problems using conventional mathematical programming approaches such as conjugate gradient, sequential quadratic programming, steepest descent, and quasi-Newton methods [3,4]. Several studies have verified that these methods are not efficient enough, or not always efficient, in dealing with many large-scale real-world multimodal, non-continuous,
Corresponding author.
E-mail addresses: as_heidari@ut.ac.ir (A.A. Heidari),
seyedali.mirjalili@griffithuni.edu.au (S. Mirjalili), hossam.faris@ju.edu.jo
(H. Faris), i.aljarah@ju.edu.jo (I. Aljarah), mmafarja@birzeit.edu (M. Mafarja),
chenhuiling.jlu@gmail.com (H. Chen).
and non-differentiable problems [5]. Accordingly, metaheuristic algorithms have been designed and utilized as competitive alternative solvers for many problems, owing to their simplicity and easy implementation. In addition, the core operations of these methods do not rely on gradient information of the objective landscape or its mathematical traits. However, a common shortcoming of the majority of metaheuristic algorithms is that they often show a delicate sensitivity to the tuning of user-defined parameters. Another drawback is that metaheuristic algorithms may not always converge to the global optimum [6].
In general, metaheuristic algorithms are of two types [7]: single-solution based (e.g. Simulated Annealing (SA) [8]) and population-based (e.g. Genetic Algorithm (GA) [9]). As the name indicates, in the former type, only one solution is processed during the optimization phase, while in the latter type, a set of solutions (i.e. a population) is evolved in each iteration of the optimization
0167-739X/©2019 Published by Elsevier B.V.
Ali Asghar Heidari
Department of Computer Science, School of Computing, National University of Singapore, Singapore
E-mail: as_heidari@ut.ac.ir, aliasghar68@gmail.com
E-mail (Singapore): aliasgha@comp.nus.edu.sg, t0917038@u.nus.edu
Source codes of HHO are publicly available at (1) http://www.alimirjalili.com/HHO.html
(2) http://www.evo-ml.com/2019/03/02/hho
Fig. 1. Classification of meta-heuristic techniques (meta-heuristic diamond).
process. Population-based techniques can often find an optimal or suboptimal solution that may coincide with the exact optimum or lie in its neighborhood. Population-based metaheuristic (P-metaheuristic) techniques mostly mimic natural phenomena [10–13]. These algorithms start the optimization process by generating a set (population) of individuals, where each individual in the population represents a candidate solution to the optimization problem. The population is evolved iteratively by replacing the current population with a newly generated one using often-stochastic operators [14,15]. The optimization process proceeds until a stopping criterion is satisfied (e.g. a maximum number of iterations) [16,17].
Based on their inspiration, P-metaheuristics can be categorized into four main groups [18,19] (see Fig. 1): Evolutionary Algorithms (EAs), physics-based, human-based, and Swarm Intelligence (SI) algorithms. EAs mimic biological evolutionary behaviors such as recombination, mutation, and selection. The most popular EA is the GA, which mimics the Darwinian theory of evolution [20]. Other popular examples of EAs are Differential Evolution (DE) [21], Genetic Programming (GP) [20], and Biogeography-Based Optimizer (BBO) [22]. Physics-based algorithms are inspired by physical laws. Some examples of these algorithms are Big-Bang Big-Crunch (BBBC) [23], Central Force Optimization (CFO) [24], and Gravitational Search Algorithm (GSA) [25]. Salcedo-Sanz [26] has deeply reviewed several physics-based optimizers. The third category of P-metaheuristics includes the set of algorithms that mimic some human behaviors. Some examples of human-based algorithms are Tabu Search (TS) [27], Socio Evolution and Learning Optimization (SELO) [28], and Teaching Learning Based Optimization (TLBO) [29]. As the last class of P-metaheuristics, SI algorithms mimic the social behaviors (e.g. decentralized, self-organized systems) of organisms living in swarms, flocks, or herds [30,31]. For instance, the flocking behavior of birds is the main inspiration of Particle Swarm Optimization (PSO), proposed by Eberhart and Kennedy [32]. In PSO, each particle in the swarm represents a candidate solution to the optimization problem. In the optimization process, each particle is updated with regard to the position of the global best particle and its own (local) best position. Ant Colony Optimization (ACO) [33], Cuckoo Search (CS) [34], and Artificial Bee Colony (ABC) are other examples of SI techniques.
Regardless of the variety of these algorithms, they share a common feature: the search has two phases, exploration (diversification) and exploitation (intensification) [26]. In the exploration phase, the algorithm should utilize and promote its randomized operators as much as possible to deeply explore various regions of the feature space. Hence, the exploratory behaviors of a well-designed optimizer should have a rich-enough random nature to efficiently allocate more randomly generated solutions to different areas of the problem topography during the early steps of the search [35]. The exploitation stage is normally performed after the exploration phase. In this phase, the optimizer tries to focus on the neighborhood of better-quality solutions located inside the feature space. It intensifies the search in a local region instead of over all regions of the landscape. A well-organized optimizer should be capable of making a reasonable, fine balance between the exploration and exploitation tendencies. Otherwise, the possibility of being trapped in local optima (LO) and of immature convergence increases.
We have witnessed a growing interest in the successful, inexpensive, efficient application of EAs and SI algorithms in recent years. However, according to the No Free Lunch (NFL) theorem [36], all optimization algorithms proposed so far show equivalent performance on average when applied to all possible optimization tasks. Hence, no algorithm can theoretically be considered a general-purpose, universally best optimizer, and the NFL theorem encourages the search for more efficient optimizers. As a result, besides the widespread studies on the efficacy and performance of traditional EAs and SI algorithms, new optimizers with specific global and local searching strategies have been emerging in recent years to provide a greater variety of choices for researchers and experts in different fields.
In this paper, a new nature-inspired optimization technique is proposed to compete with other optimizers. The main idea behind the proposed optimizer is inspired by the cooperative behaviors of one of the most intelligent birds, Harris’ hawks, in hunting escaping prey (rabbits in most cases) [37]. For this purpose, a new mathematical model is developed in this paper. Then, a stochastic metaheuristic is designed based on the proposed mathematical model to tackle various optimization problems.
The rest of this paper is organized as follows. Section 2 presents the background and inspiration from the cooperative life of Harris’ hawks. Section 3 presents the mathematical model and computational procedures of the HHO algorithm. The results of HHO in solving different benchmark and real-world case studies are presented in Section 4. Finally, Section 6 concludes the work with some useful perspectives.
2. Background
In 1997, Louis Lefebvre proposed an approach to measure avian ‘‘IQ’’ based on observed innovations in feeding behaviors [38]. Based on his studies [38–41], hawks can be listed amongst the most intelligent birds in nature. The Harris’ hawk (Parabuteo unicinctus) is a well-known bird of prey that lives in somewhat stable groups found in the southern half of Arizona, USA [37]. Harmonized foraging involving several animals for catching and then sharing the slain animal has been persuasively observed in only particular mammalian carnivores. The Harris’ hawk is distinguished by its unique cooperative foraging activities together with other family members living in the same stable group, while other raptors usually attack to discover and catch a quarry alone. This avian desert predator shows evolved, innovative team-chasing capabilities in tracing, encircling, flushing out, and eventually attacking the potential quarry. These smart birds can organize dinner parties consisting of several individuals in the non-breeding season. They are known
Fig. 2. Harris’ hawks and their behaviors.2
as truly cooperative predators in the raptor realm. As reported by Bednarz [37] in 1988, they begin the team mission at morning twilight, leaving their rest roosts and often perching on giant trees or power poles inside their home realm. They know their family members and try to be aware of their moves during the attack. When assembled and the party gets started, some hawks one after the other make short tours and then land on rather high perches. In this manner, the hawks occasionally perform a ‘‘leapfrog’’ motion all over the target site, and they rejoin and split several times to actively search for the covered animal, which is usually a rabbit.1
The main tactic of Harris’ hawks to capture a prey is the ‘‘surprise pounce’’, also known as the ‘‘seven kills’’ strategy. In this intelligent strategy, several hawks try to cooperatively attack from different directions and simultaneously converge on a detected escaping rabbit outside the cover. The attack may be completed rapidly by capturing the surprised prey in a few seconds but, occasionally, depending on the escaping capabilities and behaviors of the prey, the seven kills may include multiple, short-length, quick dives near the prey over several minutes. Harris’ hawks can demonstrate a variety of chasing styles dependent on the dynamic nature of circumstances and the escaping patterns of the prey. A switching tactic occurs when the best hawk (leader) stoops at the prey and gets lost, and the chase is continued by one of the party members. These switching activities can be observed in different situations because they are beneficial for confusing the escaping rabbit. The main advantage of these cooperative tactics is that the Harris’ hawks can pursue the detected rabbit to exhaustion, which increases its vulnerability. Moreover, by perplexing the escaping prey, it cannot recover its defensive capabilities, and finally it cannot escape from the confronting team besiege, since one of the hawks, often the most powerful and experienced one, effortlessly captures the tired rabbit and shares it with the other party members. Harris’ hawks and their main behaviors can be seen in nature, as captured in Fig. 2.
3. Harris hawks optimization (HHO)
In this section, we model the exploratory and exploitative phases of the proposed HHO, inspired by the prey exploration, surprise pounce, and different attacking strategies of Harris’ hawks. HHO is a population-based, gradient-free optimization technique; hence, it can be applied to any optimization problem subject to a proper formulation. Fig. 3 shows all phases of HHO, which are described in the next subsections.
1 Interested readers can refer to the following documentary videos: (a) https://bit.ly/2Qew2qN, (b) https://bit.ly/2qsh8Cl, (c) https://bit.ly/2P7OMvH, (d) https://bit.ly/2DosJdS.
2 These images were obtained from (a) https://bit.ly/2qAsODb, (b) https://bit.ly/2zBFo9l.
Fig. 3. Different phases of HHO.
3.1. Exploration phase
In this part, the exploration mechanism of HHO is proposed. If we consider the nature of Harris’ hawks, they can track and detect the prey with their powerful eyes, but occasionally the prey cannot be seen easily. Hence, the hawks wait, observe, and monitor the desert site to detect a prey, perhaps after several hours. In HHO, the Harris’ hawks are the candidate solutions, and the best candidate solution in each step is considered the intended prey or nearly the optimum. In HHO, the Harris’ hawks perch randomly at some locations and wait to detect a prey based on two strategies. If we consider an equal chance q for each perching strategy, they perch based on the positions of other family members (to be close enough to them when attacking) and the rabbit, which is modeled in Eq. (1) for the condition q < 0.5, or perch on random tall trees (random locations inside the group’s home range), which is modeled in Eq. (1) for the condition q ≥ 0.5:

X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X(t) \right|, & q \geq 0.5 \\ \left( X_{rabbit}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & q < 0.5 \end{cases} \quad (1)
where X(t+1) is the position vector of the hawks in the next iteration, X_rabbit(t) is the position of the rabbit, X(t) is the current position vector of the hawks, r_1, r_2, r_3, r_4, and q are random numbers inside (0,1), updated in each iteration, LB and UB are the lower and upper bounds of the variables, X_rand(t) is a randomly selected hawk from the current population, and X_m(t) is the average position of the current population of hawks.
We propose a simple model to generate random locations inside the group’s home range (LB, UB). The first rule generates solutions based on a random location and the other hawks. In the second rule of Eq. (1), we have the difference between the location of the best so far and the average position of the group, plus a randomly scaled component based on the range of the variables, while r_3 is a scaling coefficient to further increase the random nature of the rule once r_4 takes values close to 1 and similar distribution patterns may occur. In this rule, we add a randomly scaled movement length to LB. Then, we consider a random scaling coefficient for the component to provide more diversification and to explore different regions of the feature space. It is possible to construct different updating rules, but we utilized the simplest rule, which is able to mimic the behaviors of hawks. The average position of hawks is obtained using Eq. (2):
hawks is attained using Eq. (2):22
Xm(t)=1
N
N
i=1
Xi(t) (2)23
where X_i(t) indicates the location of each hawk in iteration t and N denotes the total number of hawks. It is possible to obtain the average location in different ways, but we utilized the simplest rule.
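The two perching rules of Eq. (1), together with the mean position of Eq. (2), can be sketched as follows. This is a minimal NumPy illustration under our own naming and array-layout conventions, not the authors’ released MATLAB implementation:

```python
import numpy as np

def exploration_step(X, X_rabbit, lb, ub, rng):
    """One exploration update of all hawks, Eq. (1); q, r1..r4 ~ U(0, 1).

    X        : (N, D) array of current hawk positions
    X_rabbit : (D,) best (rabbit) position found so far
    lb, ub   : (D,) lower/upper bounds of the decision variables
    """
    N, D = X.shape
    X_mean = X.mean(axis=0)                  # Eq. (2): average hawk position
    X_new = np.empty_like(X)
    for i in range(N):
        q, r1, r2, r3, r4 = rng.random(5)
        if q >= 0.5:                         # perch w.r.t. a random family member
            X_rand = X[rng.integers(N)]
            X_new[i] = X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        else:                                # perch w.r.t. rabbit and group mean
            X_new[i] = (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))
    return X_new
```

In a full optimizer this step runs while |E| ≥ 1, with the result clipped back into (LB, UB).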
3.2. Transition from exploration to exploitation
The HHO algorithm can transfer from exploration to exploitation and then change between different exploitative behaviors based on the escaping energy of the prey. The energy of a prey decreases considerably during the escaping behavior. To model this fact, the energy of a prey is modeled as:

E = 2 E_0 \left( 1 - \frac{t}{T} \right) \quad (3)
where E indicates the escaping energy of the prey, T is the maximum number of iterations, and E_0 is the initial state of its energy. In HHO, E_0 changes randomly inside the interval (−1, 1) at each iteration. When the value of E_0 decreases from 0 to −1, the rabbit is physically flagging, whilst when the value of E_0 increases from 0 to 1, the rabbit is strengthening. The dynamic escaping energy E has a decreasing trend over the iterations. When the escaping energy |E| ≥ 1, the hawks search different regions to explore a rabbit location; hence, HHO performs the exploration phase. When |E| < 1, the algorithm tries to exploit the neighborhood of the solutions during the exploitation steps. In short, exploration happens when |E| ≥ 1, while exploitation happens in later steps when |E| < 1. The time-dependent behavior of E is also demonstrated in Fig. 4.
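A minimal sketch of the energy schedule of Eq. (3) and the resulting phase switch; the function names are our own, and E_0 is redrawn uniformly from (−1, 1) at each iteration, as stated above:

```python
import numpy as np

def escaping_energy(t, T, rng):
    """Escaping energy of Eq. (3): E = 2*E0*(1 - t/T), with E0 ~ U(-1, 1)."""
    E0 = 2.0 * rng.random() - 1.0      # initial energy, redrawn every iteration
    return 2.0 * E0 * (1.0 - t / T)

def phase(E):
    """HHO explores while |E| >= 1 and exploits otherwise."""
    return "exploration" if abs(E) >= 1.0 else "exploitation"
```

Since |E| is bounded by 2(1 − t/T), exploration becomes impossible in the second half of the run, matching the decreasing trend shown in Fig. 4.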
3.3. Exploitation phase
In this phase, the Harris’ hawks perform the surprise pounce (the seven kills, as called in [37]) by attacking the intended prey detected in the previous phase. However, prey often attempt to escape from dangerous situations; hence, different chasing styles occur in real situations. According to the escaping behaviors of the prey and the chasing strategies of the Harris’ hawks, four possible strategies are proposed in HHO to model the attacking stage.
The prey always tries to escape from threatening situations. Suppose that r is the chance of a prey successfully escaping
Fig. 4. Behavior of E during two runs and 500 iterations.
(r < 0.5) or not successfully escaping (r ≥ 0.5) before the surprise pounce. Whatever the prey does, the hawks will perform a hard or soft besiege to catch it: they will encircle the prey from different directions, softly or hard, depending on the retained energy of the prey. In real situations, the hawks get closer and closer to the intended prey to increase their chances of cooperatively killing the rabbit by performing the surprise pounce. After several minutes, the escaping prey loses more and more energy; then, the hawks intensify the besiege process to effortlessly catch the exhausted prey. To model this strategy and enable HHO to switch between the soft and hard besiege processes, the E parameter is utilized.
In this regard, when |E| ≥ 0.5, the soft besiege happens, and when |E| < 0.5, the hard besiege occurs.
3.3.1. Soft besiege
When r ≥ 0.5 and |E| ≥ 0.5, the rabbit still has enough energy and tries to escape by some random misleading jumps, but finally it cannot. During these attempts, the Harris’ hawks encircle it softly to make the rabbit more exhausted and then perform the surprise pounce. This behavior is modeled by the following rules:

X(t+1) = \Delta X(t) - E \left| J X_{rabbit}(t) - X(t) \right| \quad (4)

\Delta X(t) = X_{rabbit}(t) - X(t) \quad (5)

where \Delta X(t) is the difference between the position vector of the rabbit and the current location in iteration t, r_5 is a random number inside (0,1), and J = 2(1 − r_5) represents the random jump strength of the rabbit throughout the escaping procedure. The J value changes randomly in each iteration to simulate the nature of rabbit motions.
3.3.2. Hard besiege
When r ≥ 0.5 and |E| < 0.5, the prey is so exhausted that it has low escaping energy. In addition, the Harris’ hawks encircle the intended prey hard, to finally perform the surprise pounce. In this situation, the current positions are updated using Eq. (6):

X(t+1) = X_{rabbit}(t) - E \left| \Delta X(t) \right| \quad (6)

A simple example of this step with one hawk is depicted in Fig. 5.
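The two besiege rules of Eqs. (4)–(6) can be sketched as follows; this is a NumPy illustration with our own function names, not the authors’ code:

```python
import numpy as np

def soft_besiege(X, X_rabbit, E, rng):
    """Soft besiege (r >= 0.5 and 0.5 <= |E| < 1), Eqs. (4)-(5)."""
    J = 2.0 * (1.0 - rng.random())                # random jump strength of the rabbit
    delta = X_rabbit - X                          # Eq. (5)
    return delta - E * np.abs(J * X_rabbit - X)   # Eq. (4)

def hard_besiege(X, X_rabbit, E):
    """Hard besiege (r >= 0.5 and |E| < 0.5), Eq. (6)."""
    return X_rabbit - E * np.abs(X_rabbit - X)
```

Note how the hard besiege collapses onto the rabbit as E approaches 0, while the soft besiege keeps a J-dependent random offset.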
Fig. 5. Example of overall vectors in the case of hard besiege.
Fig. 6. Example of overall vectors in the case of soft besiege with progressive
rapid dives.
3.3.3. Soft besiege with progressive rapid dives
When |E| ≥ 0.5 but r < 0.5, the rabbit has enough energy to successfully escape, and a soft besiege is still constructed before the surprise pounce. This procedure is more intelligent than the previous case.
To mathematically model the escaping patterns of the prey and the leapfrog movements (as called in [37]), the Lévy flight (LF) concept is utilized in the HHO algorithm. The LF is utilized to mimic the real zigzag deceptive motions of prey (particularly rabbits) during the escaping phase, and the irregular, abrupt, and rapid dives of the hawks around the escaping prey. Actually, the hawks perform several rapid team dives around the rabbit and try to progressively correct their locations and directions with regard to the deceptive motions of the prey. This mechanism is also supported by real observations of other competitive situations in nature. It has been confirmed that LF-based activities are the optimal searching tactics for foragers/predators in non-destructive foraging conditions [42,43]. In addition, LF-based patterns can be detected in the chasing activities of animals such as monkeys and sharks [44–47]. Hence, LF-based motions are utilized within this phase of the HHO technique.
Inspired by the real behaviors of hawks, we suppose that they can progressively select the best possible dive toward the prey when they wish to catch it in competitive situations. Therefore, to perform a soft besiege, we suppose that the hawks can evaluate (decide) their next move based on the following rule in Eq. (7):
Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X(t) \right| \quad (7)
Then, they compare the possible result of such a movement with the previous dive to decide whether it will be a good dive or not.
Fig. 7. Example of overall vectors in the case of hard besiege with progressive rapid dives in 2D and 3D space.
If it is not reasonable (when they see that the prey is performing more deceptive motions), they start to perform irregular, abrupt, and rapid dives when approaching the rabbit. We suppose that they will dive based on LF-based patterns using the following rule:
Z = Y + S \times LF(D) \quad (8)

where D is the dimension of the problem, S is a random vector of size 1 × D, and LF is the Lévy flight function, calculated using Eq. (9) [48]:

LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin\!\left(\frac{\pi\beta}{2}\right)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta-1}{2}\right)}} \right)^{1/\beta} \quad (9)

where u and v are random values inside (0,1), and \beta is a default constant set to 1.5.
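Eq. (9) can be implemented as below. One assumption to flag: the text states that u and v are values inside (0, 1), but Lévy steps of this form are customarily generated with Mantegna’s method, drawing u ~ N(0, σ²) and v ~ N(0, 1); the sketch follows that common convention:

```python
import math
import numpy as np

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight step of Eq. (9) for a D-dimensional problem."""
    if rng is None:
        rng = np.random.default_rng()
    # sigma of Eq. (9); for beta = 1.5 it evaluates to about 0.6966
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(D) * sigma     # Mantegna-style normal draws
    v = rng.standard_normal(D)
    return 0.01 * u / np.abs(v) ** (1 / beta)
```

The heavy-tailed ratio u/|v|^{1/β} produces mostly small steps with occasional long jumps, which is exactly the zigzag escape pattern the text describes.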
Hence, the final strategy for updating the positions of hawks in the soft besiege phase can be performed by Eq. (10):

X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \quad (10)

where Y and Z are obtained using Eqs. (7) and (8).
A simple illustration of this step for one hawk is demonstrated in Fig. 6. Note that the position history of the LF-based leapfrog movement patterns during some iterations is also recorded and shown in this illustration. The colored dots are the location footprints of LF-based patterns in one trial, after which HHO reaches the location Z. In each step, only the better of the positions Y and Z is selected as the next location. This strategy is applied to all search agents.
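The greedy dive selection of Eqs. (7), (8) and (10) can be sketched as follows; the fall-through behavior when neither Y nor Z improves (keeping the current position) is our reading of Eq. (10), which leaves that case implicit:

```python
import numpy as np

def soft_besiege_dives(X, X_rabbit, E, F, lf_step, rng):
    """Soft besiege with progressive rapid dives (r < 0.5, |E| >= 0.5).

    Tries the plain dive Y of Eq. (7); if it does not improve the fitness F
    (minimized), tries the LF-based dive Z of Eq. (8); per Eq. (10), a dive is
    kept only when it beats the current position.  lf_step is a precomputed
    Levy-flight vector, e.g. from Eq. (9).
    """
    J = 2.0 * (1.0 - rng.random())                 # random jump strength
    Y = X_rabbit - E * np.abs(J * X_rabbit - X)    # Eq. (7)
    if F(Y) < F(X):
        return Y
    S = rng.random(X.size)                         # random vector of size 1 x D
    Z = Y + S * lf_step                            # Eq. (8)
    if F(Z) < F(X):
        return Z
    return X                                       # neither dive improved
```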
3.3.4. Hard besiege with progressive rapid dives
When |E| < 0.5 and r < 0.5, the rabbit does not have enough energy to escape, and a hard besiege is constructed before the surprise pounce to catch and kill the prey. The situation of this step on the prey side is similar to that of the soft besiege but, this time, the hawks try to decrease the distance between their average location and the escaping prey. Therefore, the following rule is performed in the hard besiege condition:

X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \quad (11)

where Y and Z are obtained using the new rules in Eqs. (12) and (13):

Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X_m(t) \right| \quad (12)

Z = Y + S \times LF(D) \quad (13)

where X_m(t) is obtained using Eq. (2). A simple example of this step is demonstrated in Fig. 7. Note that the colored dots are the location footprints of LF-based patterns in one trial, and only Y or Z will be the next location for the new iteration.
Fig. 8. Demonstration of composition test functions.
3.4. Pseudocode of HHO
The pseudocode of the proposed HHO algorithm is reported in Algorithm 1.
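Algorithm 1 can be translated almost line for line into the sketch below (minimization). The names, the boundary clipping, and the greedy handling of the dive branches are our own choices; the original description leaves out-of-range positions unspecified:

```python
import math
import numpy as np

def hho(F, lb, ub, N=30, T=500, seed=0):
    """Minimal HHO sketch following Algorithm 1; returns (best_x, best_f)."""
    rng = np.random.default_rng(seed)
    D = lb.size
    X = rng.uniform(lb, ub, size=(N, D))        # random initial population
    x_rab, f_rab = None, math.inf               # rabbit = best location so far

    beta = 1.5                                  # Levy-flight exponent of Eq. (9)
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def lf():                                   # Levy step, Mantegna-style draws
        u = rng.standard_normal(D) * sigma
        v = rng.standard_normal(D)
        return 0.01 * u / np.abs(v) ** (1 / beta)

    for t in range(T):
        for i in range(N):                      # update rabbit (best location)
            f = F(X[i])
            if f < f_rab:
                x_rab, f_rab = X[i].copy(), f
        for i in range(N):
            E = 2 * (2 * rng.random() - 1) * (1 - t / T)   # Eq. (3), E0 in (-1, 1)
            J = 2 * (1 - rng.random())                     # jump strength
            r = rng.random()
            if abs(E) >= 1:                                # exploration, Eq. (1)
                if rng.random() >= 0.5:
                    xr = X[rng.integers(N)]
                    X[i] = xr - rng.random() * np.abs(xr - 2 * rng.random() * X[i])
                else:
                    X[i] = (x_rab - X.mean(0)) - rng.random() * (lb + rng.random() * (ub - lb))
            elif r >= 0.5 and abs(E) >= 0.5:               # soft besiege, Eqs. (4)-(5)
                X[i] = (x_rab - X[i]) - E * np.abs(J * x_rab - X[i])
            elif r >= 0.5:                                 # hard besiege, Eq. (6)
                X[i] = x_rab - E * np.abs(x_rab - X[i])
            else:                                          # progressive rapid dives
                target = X[i] if abs(E) >= 0.5 else X.mean(0)  # Eq. (7) vs Eq. (12)
                Y = x_rab - E * np.abs(J * x_rab - target)
                Z = Y + rng.random(D) * lf()                   # Eqs. (8)/(13)
                if F(Y) < F(X[i]):
                    X[i] = Y
                elif F(Z) < F(X[i]):
                    X[i] = Z
            X[i] = np.clip(X[i], lb, ub)        # keep hawks inside (LB, UB)
    return x_rab, f_rab
```

On a simple sphere function this sketch converges rapidly, illustrating the besiege-driven intensification described above.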
3.5. Computational complexity
Note that the computational complexity of HHO mainly depends on three processes: initialization, fitness evaluation, and the updating of hawks. With N hawks, the computational complexity of the initialization process is O(N). The computational complexity of the updating mechanism is O(T × N) + O(T × N × D), which is composed of searching for the best location and updating the location vectors of all hawks, where T is the maximum number of iterations and D is the dimension of the specific problem. Therefore, the computational complexity of HHO is O(N × (T + TD + 1)).
4. Experimental results and discussions
4.1. Benchmark set and compared algorithms
In order to investigate the efficacy of the proposed HHO optimizer, a well-studied set of diverse benchmark functions is selected from the literature [49,50]. This benchmark set covers three main groups of benchmark landscapes: unimodal (UM), multimodal (MM), and composition (CM). The UM functions (F1–F7)
Algorithm 1 Pseudocode of the HHO algorithm
Inputs: The population size N and maximum number of iterations T
Outputs: The location of the rabbit and its fitness value
Initialize the random population Xi (i = 1, 2, . . . , N)
while (stopping condition is not met) do
    Calculate the fitness values of the hawks
    Set Xrabbit as the location of the rabbit (best location)
    for (each hawk (Xi)) do
        Update the initial energy E0 and jump strength J    ▷ E0 = 2rand() − 1, J = 2(1 − rand())
        Update E using Eq. (3)
        if (|E| ≥ 1) then    ▷ Exploration phase
            Update the location vector using Eq. (1)
        if (|E| < 1) then    ▷ Exploitation phase
            if (r ≥ 0.5 and |E| ≥ 0.5) then    ▷ Soft besiege
                Update the location vector using Eq. (4)
            else if (r ≥ 0.5 and |E| < 0.5) then    ▷ Hard besiege
                Update the location vector using Eq. (6)
            else if (r < 0.5 and |E| ≥ 0.5) then    ▷ Soft besiege with progressive rapid dives
                Update the location vector using Eq. (10)
            else if (r < 0.5 and |E| < 0.5) then    ▷ Hard besiege with progressive rapid dives
                Update the location vector using Eq. (11)
Return Xrabbit
with unique global best can reveal the exploitative (intensifica- 37
tion) capacities of different optimizers, while the MM functions 38
(F8–F23) can disclose the exploration (diversification) and LO 39
avoidance potentials of algorithms. The mathematical formula- 40
tion and characteristics of UM and MM problems are shown in 41
Tables 16–18 in Appendix A. The third group problems (F24– 42
F29) are selected from IEEE CEC 2005 competition [51] and covers 43
hybrid composite, rotated and shifted MM test cases. These CM 44
cases are also utilized in many papers and can expose the per- 45
formance of utilized optimizers in well balancing the exploration 46
and exploitation inclinations and escaping from LO in dealing 47
with challenging problems. Details of the CM test problems are 48
also reported in Table 19 in Appendix A.Fig. 8 demonstrates three 49
of composition test problems. 50
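The branching of Algorithm 1 can be sketched in a few lines of Python. This is a minimal illustration of the phase-selection logic only (the per-phase position updates of Eqs. (1), (4), (6), (10), and (11) are left out); it assumes the linearly decaying escaping-energy schedule E = 2·E0·(1 − t/T) of Eq. (3).

```python
import random

def escaping_energy(E0, t, T):
    """Escaping energy of the prey, decaying over the iterations (Eq. (3))."""
    return 2 * E0 * (1 - t / T)

def select_phase(E, r):
    """Map (|E|, r) to the branch of Algorithm 1 that updates a hawk."""
    if abs(E) >= 1:
        return "exploration"                      # Eq. (1)
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"                     # Eq. (4)
    if r >= 0.5:
        return "hard besiege"                     # Eq. (6)
    if abs(E) >= 0.5:
        return "soft besiege, progressive dives"  # Eq. (10)
    return "hard besiege, progressive dives"      # Eq. (11)

# One hawk at the midpoint of a run: draw E0 in [-1, 1] and r in [0, 1]
E0 = 2 * random.random() - 1
E = escaping_energy(E0, t=250, T=500)
print(select_phase(E, random.random()))
```

Because |E| shrinks toward zero as t approaches T, later iterations are steered toward the two hard-besiege branches, which matches the intended exploration-to-exploitation transition.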
The results and performance of the proposed HHO are compared with those of other well-established optimization techniques such as the GA [22], BBO [22], DE [22], PSO [22], CS [34], TLBO [29], BA/BAT [52], FPA [53], FA [54], GWO [55], and MFO [56] algorithms, based on the best, worst, standard deviation (STD), and
FUTURE: 4781
Please cite this article as: A.A. Heidari, S. Mirjalili, H. Faris et al., Harris hawks optimization: Algorithm and applications, Future Generation Computer Systems (2019),
https://doi.org/10.1016/j.future.2019.02.028.
A.A. Heidari, S. Mirjalili, H. Faris et al. / Future Generation Computer Systems xxx (xxxx) xxx 7
Table 1
The parameter settings.

Algorithm  Parameter  Value
DE    Scaling factor  0.5
      Crossover probability  0.5
PSO   Topology  fully connected
      Inertia factor  0.3
      c1  1
      c2  1
TLBO  Teaching factor T  1, 2
GWO   Convergence constant a  [2, 0]
MFO   Convergence constant a  [-2, -1]
      Spiral factor b  1
CS    Discovery rate of alien solutions pa  0.25
BA    Frequency minimum Qmin  0
      Frequency maximum Qmax  2
      Loudness A  0.5
      Pulse rate r  0.5
FA    α  0.5
      β  0.2
      γ  1
FPA   Probability switch p  0.8
BBO   Habitat modification probability  1
      Immigration probability limits  [0, 1]
      Step size  1
      Max immigration (I) and max emigration (E)  1
      Mutation probability  0.005
average of the results (AVG). These algorithms cover both recently proposed techniques, such as MFO, GWO, CS, TLBO, BAT, FPA, and FA, and the most widely utilized optimizers in the field, namely the GA, DE, PSO, and BBO algorithms.
As recommended by Derrac et al. [57], the non-parametric Wilcoxon statistical test at the 5% significance level is also performed along with the experimental assessments to detect significant differences between the results attained by the different techniques.
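To make the comparison procedure concrete, a rank-sum test can be computed with a normal approximation as follows. This is a minimal pure-Python sketch (ties receive average ranks, but no tie correction is applied in the variance term), not the exact procedure of [57], and the two sample arrays are made-up stand-ins for per-run results of two optimizers.

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (z, p). Minimal sketch: average ranks for tied values,
    no tie correction in the variance term.
    """
    n1, n2 = len(x), len(y)
    pooled = sorted((v, 0 if i < n1 else 1) for i, v in enumerate(x + y))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p

# Hypothetical per-run best fitness values of two optimizers (30 runs each)
a = [0.01 * k for k in range(30)]
b = [1.0 + 0.01 * k for k in range(30)]
z, p = rank_sum_test(a, b)
print(f"z = {z:.2f}, p = {p:.3g}, significant at 5%: {p < 0.05}")
```

A p-value below 0.05 rejects the hypothesis that both optimizers draw their results from the same distribution, which is exactly how the p-value tables in Appendix B are read.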
4.2. Experimental setup

All algorithms were implemented in Matlab 7.10 (R2010a) on a computer with Windows 7 Professional 64-bit and 64 GB of RAM. The swarm size and maximum number of iterations of all optimizers are set to 30 and 500, respectively. All results are recorded and compared based on the average performance of the optimizers over 30 independent runs.
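The run protocol above (30 individuals, 500 iterations, statistics over 30 independent runs) can be expressed as a small harness. The optimizer below is a hypothetical random-search stand-in, not HHO, and the reduced budget is only to keep the sketch fast; the recorded best/worst/AVG/STD are the metrics reported in the result tables.

```python
import random
import statistics

def random_search(objective, dim, iters=500, pop=30, lb=-100.0, ub=100.0):
    """Hypothetical stand-in for an optimizer under test (not HHO itself):
    pop random samples per iteration, keeping the best fitness seen."""
    best = float("inf")
    for _ in range(iters):
        for _ in range(pop):
            x = [random.uniform(lb, ub) for _ in range(dim)]
            best = min(best, objective(x))
    return best

def sphere(x):  # F1 in the benchmark suite
    return sum(v * v for v in x)

# 30 independent runs, as in the experimental protocol (reduced budget here)
runs = [random_search(sphere, dim=5, iters=50) for _ in range(30)]
print(f"best={min(runs):.3g} worst={max(runs):.3g} "
      f"AVG={statistics.mean(runs):.3g} STD={statistics.stdev(runs):.3g}")
```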
The settings of the GA, PSO, DE, and BBO algorithms are the same as those set by Dan Simon in the original work on BBO [22], while for BA [52], FA [58], TLBO [29], GWO [55], FPA [53], CS [34], and MFO [56], the parameters are the same as the recommended settings in the original works. The used parameters are also reported in Table 1.
4.3. Qualitative results of HHO

The qualitative results of HHO for several standard unimodal and multimodal test problems are demonstrated in Figs. 9–11. These results include four well-known metrics: the search history, the trajectory of the first hawk, the average fitness of the population, and the convergence behavior. In addition, the escaping energy of the rabbit is also monitored during the iterations. The search-history diagram reveals the positions visited by the artificial hawks during the iterations. The trajectory map monitors how the first variable of the first hawk varies during the steps of the process. The average fitness of the hawks monitors how the average fitness of the whole population varies during the optimization process. The convergence metric reveals how the fitness value of the rabbit (best solution) varies during the optimization. Finally, the diagram of escaping energy demonstrates how the energy of the rabbit varies during the simulation.
From the history of sampled locations in Figs. 9–11, it can be observed that HHO reveals a similar pattern in dealing with different cases, in which the hawks attempt to initially boost the diversification and explore the favorable areas of the solution space, and then exploit the vicinity of the best locations. The trajectory diagram can help us comprehend the searching behavior of the foremost hawk (as a representative of the rest of the hawks). With this metric, we can check whether the foremost hawk faces abrupt changes during the early phases and gradual variations in the concluding steps. Referring to Van Den Bergh and Engelbrecht [59], these activities can guarantee that a P-metaheuristic finally converges to a position and exploits the target region.

As per the trajectories in Figs. 9–11, we see that the foremost hawk starts the searching procedure with sudden movements. The amplitude of these variations covers more than 50% of the solution space. This observation discloses the exploration propensities of the proposed HHO. As time passes, the amplitude of these fluctuations gradually decreases, which guarantees the transition of HHO from exploratory trends to exploitative steps. Eventually, the motion pattern of the first hawk becomes very stable, which shows that HHO is exploiting the promising regions during the concluding steps. By monitoring the average fitness of the population, the next measure, we can notice the reduction patterns in fitness values as HHO enriches the quality of the randomized candidate hawks. Based on the diagrams demonstrated in Figs. 9–11, HHO enhances the quality of all hawks during the first half of the iterations, and there is an accelerating decreasing pattern in all curves. Again, the amplitude of variations of the fitness results decreases with more iterations. Hence, HHO can dynamically focus on more promising areas during the iterations. According to the convergence curves in Figs. 9–11, which show the fitness of the best hawk found so far, we can detect accelerated decreasing patterns in all curves, especially after half of the iterations. We can also detect the estimated moment at which HHO shifts from exploration to exploitation. In this regard, it is observed that HHO reveals an accelerated convergence trend.
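The four diagnostics of this section can be logged with a small recorder around any population loop. The sketch below wraps a deliberately toy update rule (jitter toward the current best, with shrinking amplitude), which is a made-up stand-in for HHO used only to show where each metric is sampled.

```python
import random

def track_run(objective, dim=2, pop=30, iters=100, lb=-10.0, ub=10.0):
    """Record the qualitative metrics of Section 4.3 for a toy optimizer:
    search history, trajectory of the first individual's first variable,
    average population fitness, and convergence (best-so-far) curve."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    history, trajectory, avg_fitness, convergence = [], [], [], []
    best = float("inf")
    for t in range(iters):
        fitness = [objective(x) for x in X]
        best = min(best, min(fitness))
        history.extend(tuple(x) for x in X)   # all visited positions
        trajectory.append(X[0][0])            # first variable of first member
        avg_fitness.append(sum(fitness) / pop)
        convergence.append(best)              # best-so-far curve
        # Toy update: move toward the leader with a shrinking random jitter
        leader = X[fitness.index(min(fitness))]
        scale = (1 - t / iters) * (ub - lb) / 2
        X = [[xi + 0.5 * random.uniform(-scale, scale) + 0.5 * (li - xi)
              for xi, li in zip(x, leader)] for x in X]
    return history, trajectory, avg_fitness, convergence

hist, traj, avg, conv = track_run(lambda x: sum(v * v for v in x))
print(f"final best = {conv[-1]:.4g}")
```

The shrinking jitter amplitude reproduces, in miniature, the pattern described above: wide trajectory swings early on, then increasingly stable motion, with a monotone best-so-far curve.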
4.4. Scalability analysis

In this section, a scalability assessment is utilized to investigate the impact of dimension on the results of HHO. This test has been utilized in previous studies, and it can reveal the impact of dimensionality on the quality of solutions for the HHO optimizer, in order to recognize its efficacy not only for problems with lower dimensions but also for higher-dimensional tasks. In addition, it reveals how a P-metaheuristic can preserve its searching advantages in higher dimensions. For this experiment, HHO is utilized to tackle the scalable UM and MM F1–F13 test cases with 30, 100, 500, and 1000 dimensions. The average error (AVG) and STD of the attained results of all optimizers over 30 independent runs and 500 iterations are recorded and compared for each dimension.

Table 2 reveals the results of HHO versus the other methods in dealing with the F1–F13 problems with different dimensions. The scalability results for all techniques are also illustrated in Fig. 12. Note that the detailed results of all techniques are reported in the next parts.

As can be seen in Table 2, HHO exposes excellent results in all dimensions, and its performance remains consistently superior on cases with many variables. As per the curves in Fig. 12, it is observed that the optimality of results and the performance of the other methods significantly degrade with increasing dimensions. This reveals that HHO is capable of maintaining a good balance between the exploratory and exploitative tendencies on problems with many variables.
Fig. 9. Qualitative results for unimodal F1, F3, and F4 problems.
Fig. 10. Qualitative results for F7, F9, and F10 problems.
Fig. 11. Qualitative results for F13 problem.
Table 2
Results of HHO for different dimensions of the scalable F1–F13 problems.

Problem/D Metric 30 100 500 1000
F1 AVG 3.95E-97 1.91E-94 1.46E-92 1.06E-94
   STD 1.72E-96 8.66E-94 8.01E-92 4.97E-94
F2 AVG 1.56E-51 9.98E-52 7.87E-49 2.52E-50
   STD 6.98E-51 2.66E-51 3.11E-48 5.02E-50
F3 AVG 1.92E-63 1.84E-59 6.54E-37 1.79E-17
   STD 1.05E-62 1.01E-58 3.58E-36 9.81E-17
F4 AVG 1.02E-47 8.76E-47 1.29E-47 1.43E-46
   STD 5.01E-47 4.79E-46 4.11E-47 7.74E-46
F5 AVG 1.32E-02 2.36E-02 3.10E-01 5.73E-01
   STD 1.87E-02 2.99E-02 3.73E-01 1.40E+00
F6 AVG 1.15E-04 5.12E-04 2.94E-03 3.61E-03
   STD 1.56E-04 6.77E-04 3.98E-03 5.38E-03
F7 AVG 1.40E-04 1.85E-04 2.51E-04 1.41E-04
   STD 1.07E-04 4.06E-04 2.43E-04 1.63E-04
F8 AVG -1.25E+04 -4.19E+04 -2.09E+05 -4.19E+05
   STD 1.47E+02 2.82E+00 2.84E+01 1.03E+02
F9 AVG 0.00E+00 0.00E+00 0.00E+00 0.00E+00
   STD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F10 AVG 8.88E-16 8.88E-16 8.88E-16 8.88E-16
    STD 4.01E-31 4.01E-31 4.01E-31 4.01E-31
F11 AVG 0.00E+00 0.00E+00 0.00E+00 0.00E+00
    STD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F12 AVG 7.35E-06 4.23E-06 1.41E-06 1.02E-06
    STD 1.19E-05 5.25E-06 1.48E-06 1.16E-06
F13 AVG 1.57E-04 9.13E-05 3.44E-04 8.41E-04
    STD 2.15E-04 1.26E-04 4.75E-04 1.18E-03
4.5. Quantitative results of HHO and discussion

In this section, the results of HHO are compared with those of the other optimizers for different dimensions of the F1–F13 test problems, in addition to the F14–F29 MM and CM test cases. Note that the results are presented for 30, 100, 500, and 1000 dimensions of the scalable F1–F13 problems. Tables 3–6 show the obtained results for HHO versus the other competitors on the scalable functions. Table 8 also reveals the performance of the algorithms on the F14–F29 test problems. In order to investigate the significant differences between the results of the proposed HHO and the other optimizers, the Wilcoxon rank-sum test at the 5% significance level is performed here [57]. Tables 20–24 in Appendix B show the attained p-values of this test.

As per the results in Table 3, HHO obtains the best results compared to the other competitors on the F1–F5, F7, and F9–F13 problems. The results of HHO are considerably better than those of the other algorithms on 84.6% of these 30-dimensional functions, demonstrating the superior performance of this optimizer. According to the p-values in Table 20, the observed differences in the results are statistically meaningful for all cases.
Table 3
Results of benchmark functions (F1–F13), with 30 dimensions.

Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 3.95E-97 1.03E+03 1.83E+04 7.59E+01 2.01E+03 1.18E-27 6.59E+04 7.11E-03 9.06E-04 1.01E+03 2.17E-89 1.33E-03
   STD 1.72E-96 5.79E+02 3.01E+03 2.75E+01 5.60E+02 1.47E-27 7.51E+03 3.21E-03 4.55E-04 3.05E+03 3.14E-89 5.92E-04
F2 AVG 1.56E-51 2.47E+01 3.58E+02 1.36E-03 3.22E+01 9.71E-17 2.71E+08 4.34E-01 1.49E-01 3.19E+01 2.77E-45 6.83E-03
   STD 6.98E-51 5.68E+00 1.35E+03 7.45E-03 5.55E+00 5.60E-17 1.30E+09 1.84E-01 2.79E-02 2.06E+01 3.11E-45 2.06E-03
F3 AVG 1.92E-63 2.65E+04 4.05E+04 1.21E+04 1.41E+03 5.12E-05 1.38E+05 1.66E+03 2.10E-01 2.43E+04 3.91E-18 3.97E+04
   STD 1.05E-62 3.44E+03 8.21E+03 2.69E+03 5.59E+02 2.03E-04 4.72E+04 6.72E+02 5.69E-02 1.41E+04 8.04E-18 5.37E+03
F4 AVG 1.02E-47 5.17E+01 4.39E+01 3.02E+01 2.38E+01 1.24E-06 8.51E+01 1.11E-01 9.65E-02 7.00E+01 1.68E-36 1.15E+01
   STD 5.01E-47 1.05E+01 3.64E+00 4.39E+00 2.77E+00 1.94E-06 2.95E+00 4.75E-02 1.94E-02 7.06E+00 1.47E-36 2.37E+00
F5 AVG 1.32E-02 1.95E+04 1.96E+07 1.82E+03 3.17E+05 2.70E+01 2.10E+08 7.97E+01 2.76E+01 7.35E+03 2.54E+01 1.06E+02
   STD 1.87E-02 1.31E+04 6.25E+06 9.40E+02 1.75E+05 7.78E-01 4.17E+07 7.39E+01 4.51E-01 2.26E+04 4.26E-01 1.01E+02
F6 AVG 1.15E-04 9.01E+02 1.87E+04 6.71E+01 1.70E+03 8.44E-01 6.69E+04 6.94E-03 3.13E-03 2.68E+03 3.29E-05 1.44E-03
   STD 1.56E-04 2.84E+02 2.92E+03 2.20E+01 3.13E+02 3.18E-01 5.87E+03 3.61E-03 1.30E-03 5.84E+03 8.65E-05 5.38E-04
F7 AVG 1.40E-04 1.91E-01 1.07E+01 2.91E-03 3.41E-01 1.70E-03 4.57E+01 6.62E-02 7.29E-02 4.50E+00 1.16E-03 5.24E-02
   STD 1.07E-04 1.50E-01 3.05E+00 1.83E-03 1.10E-01 1.06E-03 7.82E+00 4.23E-02 2.21E-02 9.21E+00 3.63E-04 1.37E-02
F8 AVG -1.25E+04 -1.26E+04 -3.86E+03 -1.24E+04 -6.45E+03 -5.97E+03 -2.33E+03 -5.85E+03 -5.19E+19 -8.48E+03 -7.76E+03 -6.82E+03
   STD 1.47E+02 4.51E+00 2.49E+02 3.50E+01 3.03E+02 7.10E+02 2.96E+02 1.16E+03 1.76E+20 7.98E+02 1.04E+03 3.94E+02
F9 AVG 0.00E+00 9.04E+00 2.87E+02 0.00E+00 1.82E+02 2.19E+00 1.92E+02 3.82E+01 1.51E+01 1.59E+02 1.40E+01 1.58E+02
   STD 0.00E+00 4.58E+00 1.95E+01 0.00E+00 1.24E+01 3.69E+00 3.56E+01 1.12E+01 1.25E+00 3.21E+01 5.45E+00 1.17E+01
F10 AVG 8.88E-16 1.36E+01 1.75E+01 2.13E+00 7.14E+00 1.03E-13 1.92E+01 4.58E-02 3.29E-02 1.74E+01 6.45E-15 1.21E-02
    STD 4.01E-31 1.51E+00 3.67E-01 3.53E-01 1.08E+00 1.70E-14 2.43E-01 1.20E-02 7.93E-03 4.95E+00 1.79E-15 3.30E-03
F11 AVG 0.00E+00 1.01E+01 1.70E+02 1.46E+00 1.73E+01 4.76E-03 6.01E+02 4.23E-03 4.29E-05 3.10E+01 0.00E+00 3.52E-02
    STD 0.00E+00 2.43E+00 3.17E+01 1.69E-01 3.63E+00 8.57E-03 5.50E+01 1.29E-03 2.00E-05 5.94E+01 0.00E+00 7.20E-02
F12 AVG 2.08E-06 4.77E+00 1.51E+07 6.68E-01 3.05E+02 4.83E-02 4.71E+08 3.13E-04 5.57E-05 2.46E+02 7.35E-06 2.25E-03
    STD 1.19E-05 1.56E+00 9.88E+06 2.62E-01 1.04E+03 2.12E-02 1.54E+08 1.76E-04 4.96E-05 1.21E+03 7.45E-06 1.70E-03
F13 AVG 1.57E-04 1.52E+01 5.73E+07 1.82E+00 9.59E+04 5.96E-01 9.40E+08 2.08E-03 8.19E-03 2.73E+07 7.89E-02 9.12E-03
    STD 2.15E-04 4.52E+00 2.68E+07 3.41E-01 1.46E+05 2.23E-01 1.67E+08 9.62E-04 6.74E-03 1.04E+08 8.78E-02 1.16E-02
Table 4
Results of benchmark functions (F1–F13), with 100 dimensions.

Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.91E-94 5.41E+04 1.06E+05 2.85E+03 1.39E+04 1.59E-12 2.72E+05 3.05E-01 3.17E-01 6.20E+04 3.62E-81 8.26E+03
   STD 8.66E-94 1.42E+04 8.47E+03 4.49E+02 2.71E+03 1.63E-12 1.42E+04 5.60E-02 5.28E-02 1.25E+04 4.14E-81 1.32E+03
F2 AVG 9.98E-52 2.53E+02 6.06E+23 1.59E+01 1.01E+02 4.31E-08 6.00E+43 1.45E+01 4.05E+00 2.46E+02 3.27E-41 1.21E+02
   STD 2.66E-51 1.41E+01 2.18E+24 3.74E+00 9.36E+00 1.46E-08 1.18E+44 6.73E+00 3.16E-01 4.48E+01 2.75E-41 2.33E+01
F3 AVG 1.84E-59 2.53E+05 4.22E+05 1.70E+05 1.89E+04 4.09E+02 1.43E+06 4.65E+04 6.88E+00 2.15E+05 4.33E-07 5.01E+05
   STD 1.01E-58 5.03E+04 7.08E+04 2.02E+04 5.44E+03 2.77E+02 6.21E+05 6.92E+03 1.02E+00 4.43E+04 8.20E-07 5.87E+04
F4 AVG 8.76E-47 8.19E+01 6.07E+01 7.08E+01 3.51E+01 8.89E-01 9.41E+01 1.91E+01 2.58E-01 9.31E+01 6.36E-33 9.62E+01
   STD 4.79E-46 3.15E+00 3.05E+00 4.73E+00 3.37E+00 9.30E-01 1.49E+00 3.12E+00 2.80E-02 2.13E+00 6.66E-33 1.00E+00
F5 AVG 2.36E-02 2.37E+07 2.42E+08 4.47E+05 4.64E+06 9.79E+01 1.10E+09 8.46E+02 1.33E+02 1.44E+08 9.67E+01 1.99E+07
   STD 2.99E-02 8.43E+06 4.02E+07 2.05E+05 1.98E+06 6.75E-01 9.47E+07 8.13E+02 7.34E+00 7.50E+07 7.77E-01 5.80E+06
F6 AVG 5.12E-04 5.42E+04 1.07E+05 2.85E+03 1.26E+04 1.03E+01 2.69E+05 2.95E-01 2.65E+00 6.68E+04 3.27E+00 8.07E+03
   STD 6.77E-04 1.09E+04 9.70E+03 4.07E+02 2.06E+03 1.05E+00 1.25E+04 5.34E-02 3.94E-01 1.46E+04 6.98E-01 1.64E+03
F7 AVG 1.85E-04 2.73E+01 3.41E+02 1.25E+00 5.84E+00 7.60E-03 3.01E+02 5.65E-01 1.21E+00 2.56E+02 1.50E-03 1.96E+01
   STD 4.06E-04 4.45E+01 8.74E+01 5.18E+00 2.16E+00 2.66E-03 2.66E+01 1.64E-01 2.65E-01 8.91E+01 5.39E-04 5.66E+00
F8 AVG -4.19E+04 -4.10E+04 -7.33E+03 -3.85E+04 -1.28E+04 -1.67E+04 -4.07E+03 -1.81E+04 -2.84E+18 -2.30E+04 -1.71E+04 -1.19E+04
   STD 2.82E+00 1.14E+02 4.75E+02 2.80E+02 4.64E+02 2.62E+03 9.37E+02 3.23E+03 6.91E+18 1.98E+03 3.54E+03 5.80E+02
F9 AVG 0.00E+00 3.39E+02 1.16E+03 9.11E+00 8.47E+02 1.03E+01 7.97E+02 2.36E+02 1.72E+02 8.65E+02 1.02E+01 1.03E+03
   STD 0.00E+00 4.17E+01 5.74E+01 2.73E+00 4.01E+01 9.02E+00 6.33E+01 2.63E+01 9.24E+00 8.01E+01 5.57E+01 4.03E+01
F10 AVG 8.88E-16 1.82E+01 1.91E+01 5.57E+00 8.21E+00 1.20E-07 1.94E+01 9.81E-01 3.88E-01 1.99E+01 1.66E-02 1.22E+01
    STD 4.01E-31 4.35E-01 2.04E-01 4.72E-01 1.14E+00 5.07E-08 6.50E-02 2.55E-01 5.23E-02 8.58E-02 9.10E-02 8.31E-01
F11 AVG 0.00E+00 5.14E+02 9.49E+02 2.24E+01 1.19E+02 4.87E-03 2.47E+03 1.19E-01 4.56E-03 5.60E+02 0.00E+00 7.42E+01
    STD 0.00E+00 1.05E+02 6.00E+01 4.35E+00 2.00E+01 1.07E-02 1.03E+02 2.34E-02 9.73E-04 1.23E+02 0.00E+00 1.40E+01
F12 AVG 4.23E-06 4.55E+06 3.54E+08 3.03E+02 1.55E+05 2.87E-01 2.64E+09 4.45E+00 2.47E-02 2.82E+08 3.03E-02 3.90E+07
    STD 5.25E-06 8.22E+06 8.75E+07 1.48E+03 1.74E+05 6.41E-02 2.69E+08 1.32E+00 5.98E-03 1.45E+08 1.02E-02 1.88E+07
F13 AVG 9.13E-05 5.26E+07 8.56E+08 6.82E+04 2.76E+06 6.87E+00 5.01E+09 4.50E+01 5.84E+00 6.68E+08 5.47E+00 7.19E+07
    STD 1.26E-04 3.76E+07 2.16E+08 3.64E+04 1.80E+06 3.32E-01 3.93E+08 2.24E+01 1.21E+00 3.05E+08 8.34E-01 2.73E+07
From Table 4, when we have a 100-dimensional search space, HHO considerably outperforms the other techniques and attains the best results for 92.3% of the F1–F13 problems. It is observed that the results of HHO are again remarkably better than those of the other techniques. With regard to the p-values in Table 21, the solutions of HHO are significantly better than those realized by the other techniques in almost all cases. From Table 5, we see that HHO attains the best results in terms of AVG and STD on 12 of the test cases with 500 dimensions. Considering the p-values in Table 22, HHO significantly outperforms the other optimizers in all cases. As per the results in Table 6, similarly to what we observed in lower dimensions, HHO still has a remarkably superior performance on the F1–F13 test functions compared to the GA, PSO, DE, BBO, CS, GWO, MFO, TLBO, BAT, FA, and FPA optimizers. The statistical results in Table 23 also verify the significant gap between the results of HHO and the other optimizers in almost all cases. It is seen that HHO reaches the global optimum for the F9 and F11 cases in every dimension.
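In suites of this kind, F9, F10, and F11 are commonly the Rastrigin, Ackley, and Griewank functions; the sketch below assumes that mapping (the authoritative definitions are in Appendix A, which is outside this excerpt). It also illustrates why a perfect F10 score is reported as 8.88E-16 rather than zero: near the origin, the Ackley formula evaluates at the scale of the double-precision rounding floor, so a tiny residual of that order is the best any solver can print.

```python
import math

def rastrigin(x):   # commonly F9 in this suite; global minimum 0 at the origin
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def ackley(x):      # commonly F10; analytically 0 at the origin
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):    # commonly F11; global minimum 0 at the origin
    prod = 1.0
    for i, v in enumerate(x, start=1):
        prod *= math.cos(v / math.sqrt(i))
    return sum(v * v for v in x) / 4000 - prod + 1

origin = [0.0] * 30
print(rastrigin(origin), griewank(origin), ackley(origin))
```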
In order to further check the efficacy of HHO, we recorded the running time taken by the optimizers to find the solutions for the F1–F13 problems with 1000 dimensions, and the results are reported in Table 7. As per the results in Table 7, we detect that HHO shows a reasonably fast and competitive performance in finding the best solutions compared to other well-established optimizers, even for high-dimensional unimodal and multimodal cases. Based on the average running time over the 13 problems, HHO performs faster than the BBO, PSO, GA, CS, GWO, and FA algorithms. These observations are also in accordance with the computational complexity of HHO.
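A per-run wall-clock measurement of the kind summarized in Table 7 can be sketched as follows. The optimizer, problem size, run count, and budget here are illustrative stand-ins, not the paper's setup; only the timing pattern (one timer per independent run, then AVG and STD) mirrors the reported protocol.

```python
import random
import statistics
import time

def toy_optimizer(objective, dim, evals):
    """Hypothetical optimizer stand-in: plain random sampling."""
    best = float("inf")
    for _ in range(evals):
        x = [random.uniform(-100, 100) for _ in range(dim)]
        best = min(best, objective(x))
    return best

def sphere(x):
    return sum(v * v for v in x)

times = []
for _ in range(5):                 # the paper averages over 30 runs
    t0 = time.perf_counter()
    toy_optimizer(sphere, dim=200, evals=200)
    times.append(time.perf_counter() - t0)
print(f"AVG={statistics.mean(times):.3f}s STD={statistics.stdev(times):.3f}s")
```

`time.perf_counter()` is used rather than `time.time()` because it is a monotonic, high-resolution clock intended for interval measurement.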
The results in Table 8 verify that HHO provides superior and very competitive results on the F14–F23 fixed-dimension MM test cases. The results on F16–F18 are very competitive, and all algorithms have attained high-quality results. Based on the results in Table 8, the proposed HHO achieves the best results on the F14–F23 problems in comparison with the other approaches. Based on the results for the F24–F29 hybrid CM functions in Table 8, HHO is capable of achieving high-quality solutions and outperforming the other competitors. The p-values in Table 24 also confirm the meaningful advantage of HHO compared to the other optimizers for the majority of cases.
Table 5
Results of benchmark functions (F1–F13), with 500 dimensions.

Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.46E-92 6.06E+05 6.42E+05 1.60E+05 8.26E+04 1.42E-03 1.52E+06 6.30E+04 6.80E+00 1.15E+06 2.14E-77 7.43E+05
   STD 8.01E-92 7.01E+04 2.96E+04 9.76E+03 1.32E+04 3.99E-04 3.58E+04 8.47E+03 4.93E-01 3.54E+04 1.94E-77 3.67E+04
F2 AVG 7.87E-49 1.94E+03 6.08E+09 5.95E+02 5.13E+02 1.10E-02 8.34E+09 7.13E+02 4.57E+01 3.00E+08 2.31E-39 3.57E+09
   STD 3.11E-48 7.03E+01 1.70E+10 1.70E+01 4.84E+01 1.93E-03 1.70E+10 3.76E+01 2.05E+00 1.58E+09 1.63E-39 1.70E+10
F3 AVG 6.54E-37 5.79E+06 1.13E+07 2.98E+06 5.34E+05 3.34E+05 3.37E+07 1.19E+06 2.03E+02 4.90E+06 1.06E+00 1.20E+07
   STD 3.58E-36 9.08E+05 1.43E+06 3.87E+05 1.34E+05 7.95E+04 1.41E+07 1.88E+05 2.72E+01 1.02E+06 3.70E+00 1.49E+06
F4 AVG 1.29E-47 9.59E+01 8.18E+01 9.35E+01 4.52E+01 6.51E+01 9.82E+01 5.00E+01 4.06E-01 9.88E+01 4.02E-31 9.92E+01
   STD 4.11E-47 1.20E+00 1.49E+00 9.05E-01 4.28E+00 5.72E+00 3.32E-01 1.73E+00 3.03E-02 4.15E-01 2.67E-31 2.33E-01
F5 AVG 3.10E-01 1.79E+09 1.84E+09 2.07E+08 3.30E+07 4.98E+02 6.94E+09 2.56E+07 1.21E+03 5.01E+09 4.97E+02 4.57E+09
   STD 3.73E-01 4.11E+08 1.11E+08 2.08E+07 8.76E+06 5.23E-01 2.23E+08 6.14E+06 7.04E+01 2.50E+08 3.07E-01 1.25E+09
F6 AVG 2.94E-03 6.27E+05 6.57E+05 1.68E+05 8.01E+04 9.22E+01 1.53E+06 6.30E+04 8.27E+01 1.16E+06 7.82E+01 7.23E+05
   STD 3.98E-03 7.43E+04 3.29E+04 8.23E+03 9.32E+03 2.15E+00 3.37E+04 8.91E+03 2.24E+00 3.48E+04 2.50E+00 3.28E+04
F7 AVG 2.51E-04 9.10E+03 1.43E+04 2.62E+03 2.53E+02 4.67E-02 2.23E+04 3.71E+02 8.05E+01 3.84E+04 1.71E-03 2.39E+04
   STD 2.43E-04 2.20E+03 1.51E+03 3.59E+02 6.28E+01 1.12E-02 1.15E+03 6.74E+01 1.37E+01 2.24E+03 4.80E-04 2.72E+03
F8 AVG -2.09E+05 -1.31E+05 -1.65E+04 -1.42E+05 -3.00E+04 -5.70E+04 -9.03E+03 -7.27E+04 -2.10E+17 -6.29E+04 -5.02E+04 -2.67E+04
   STD 2.84E+01 2.31E+04 9.99E+02 1.98E+03 1.14E+03 3.12E+03 2.12E+03 1.15E+04 1.14E+18 5.71E+03 1.00E+04 1.38E+03
F9 AVG 0.00E+00 3.29E+03 6.63E+03 7.86E+02 4.96E+03 7.84E+01 6.18E+03 2.80E+03 2.54E+03 6.96E+03 0.00E+00 7.14E+03
   STD 0.00E+00 1.96E+02 1.07E+02 3.42E+01 7.64E+01 3.13E+01 1.20E+02 1.42E+02 5.21E+01 1.48E+02 0.00E+00 1.05E+02
F10 AVG 8.88E-16 1.96E+01 1.97E+01 1.44E+01 8.55E+00 1.93E-03 2.04E+01 1.24E+01 1.07E+00 2.03E+01 7.62E-01 2.06E+01
    STD 4.01E-31 2.04E-01 1.04E-01 2.22E-01 8.66E-01 3.50E-04 3.25E-02 4.46E-01 6.01E-02 1.48E-01 2.33E+00 2.45E-01
F11 AVG 0.00E+00 5.42E+03 5.94E+03 1.47E+03 6.88E+02 1.55E-02 1.38E+04 5.83E+02 2.66E-02 1.03E+04 0.00E+00 6.75E+03
    STD 0.00E+00 7.32E+02 3.19E+02 8.10E+01 8.17E+01 3.50E-02 3.19E+02 7.33E+01 2.30E-03 4.43E+02 0.00E+00 2.97E+02
F12 AVG 1.41E-06 2.79E+09 3.51E+09 1.60E+08 4.50E+06 7.42E-01 1.70E+10 8.67E+05 3.87E-01 1.20E+10 4.61E-01 1.60E+10
    STD 1.48E-06 1.11E+09 4.16E+08 3.16E+07 3.37E+06 4.38E-02 6.29E+08 6.23E+05 2.47E-02 6.82E+08 2.40E-02 2.34E+09
F13 AVG 3.44E-04 8.84E+09 6.82E+09 5.13E+08 3.94E+07 5.06E+01 3.17E+10 2.29E+07 6.00E+01 2.23E+10 4.98E+01 2.42E+10
    STD 4.75E-04 2.00E+09 8.45E+08 6.59E+07 1.87E+07 1.30E+00 9.68E+08 9.46E+06 1.13E+00 1.13E+09 9.97E-03 6.39E+09
Table 6
Results of benchmark functions (F1–F13), with 1000 dimensions.

Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.06E-94 1.36E+06 1.36E+06 6.51E+05 1.70E+05 2.42E-01 3.12E+06 3.20E+05 1.65E+01 2.73E+06 2.73E-76 2.16E+06
   STD 4.97E-94 1.79E+05 6.33E+04 2.37E+04 2.99E+04 4.72E-02 4.61E+04 2.11E+04 1.27E+00 4.70E+04 7.67E-76 3.39E+05
F2 AVG 2.52E-50 4.29E+03 1.79E+10 1.96E+03 8.34E+02 7.11E-01 1.79E+10 1.79E+10 1.02E+02 1.79E+10 1.79E+10 1.79E+10
   STD 5.02E-50 8.86E+01 1.79E+10 2.18E+01 8.96E+01 4.96E-01 1.79E+10 1.79E+10 3.49E+00 1.79E+10 1.79E+10 1.79E+10
F3 AVG 1.79E-17 2.29E+07 3.72E+07 9.92E+06 1.95E+06 1.49E+06 1.35E+08 4.95E+06 8.67E+02 1.94E+07 8.61E-01 5.03E+07
   STD 9.81E-17 3.93E+06 1.16E+07 1.48E+06 4.20E+05 2.43E+05 4.76E+07 7.19E+05 1.10E+02 3.69E+06 1.33E+00 4.14E+06
F4 AVG 1.43E-46 9.79E+01 8.92E+01 9.73E+01 5.03E+01 7.94E+01 9.89E+01 6.06E+01 4.44E-01 9.96E+01 1.01E-30 9.95E+01
   STD 7.74E-46 7.16E-01 2.39E+00 7.62E-01 5.37E+00 2.77E+00 2.22E-01 2.69E+00 2.24E-02 1.49E-01 5.25E-31 1.43E-01
F5 AVG 5.73E-01 4.73E+09 3.72E+09 1.29E+09 7.27E+07 1.06E+03 1.45E+10 2.47E+08 2.68E+03 1.25E+10 9.97E+02 1.49E+10
   STD 1.40E+00 9.63E+08 2.76E+08 6.36E+07 1.84E+07 3.07E+01 3.20E+08 3.24E+07 1.27E+02 3.15E+08 2.01E-01 3.06E+08
F6 AVG 3.61E-03 1.52E+06 1.38E+06 6.31E+05 1.60E+05 2.03E+02 3.11E+06 3.18E+05 2.07E+02 2.73E+06 1.93E+02 2.04E+06
   STD 5.38E-03 1.88E+05 6.05E+04 1.82E+04 1.86E+04 2.45E+00 6.29E+04 2.47E+04 4.12E+00 4.56E+04 2.35E+00 2.46E+05
F7 AVG 1.41E-04 4.45E+04 6.26E+04 3.84E+04 1.09E+03 1.47E-01 1.25E+05 4.44E+03 4.10E+02 1.96E+05 1.83E-03 2.27E+05
   STD 1.63E-04 8.40E+03 4.16E+03 2.91E+03 3.49E+02 3.28E-02 3.93E+03 4.00E+02 8.22E+01 6.19E+03 5.79E-04 3.52E+04
F8 AVG -4.19E+05 -1.94E+05 -2.30E+04 -2.29E+05 -4.25E+04 -8.64E+04 -1.48E+04 -1.08E+05 -9.34E+14 -9.00E+04 -6.44E+04 -3.72E+04
   STD 1.03E+02 9.74E+03 1.70E+03 3.76E+03 1.47E+03 1.91E+04 3.14E+03 1.69E+04 2.12E+15 7.20E+03 1.92E+04 1.23E+03
F9 AVG 0.00E+00 8.02E+03 1.35E+04 2.86E+03 1.01E+04 2.06E+02 1.40E+04 7.17E+03 6.05E+03 1.56E+04 0.00E+00 1.50E+04
   STD 0.00E+00 3.01E+02 1.83E+02 9.03E+01 1.57E+02 4.81E+01 1.85E+02 1.88E+02 1.41E+02 1.94E+02 0.00E+00 1.79E+02
F10 AVG 8.88E-16 1.95E+01 1.98E+01 1.67E+01 8.62E+00 1.88E-02 2.07E+01 1.55E+01 1.18E+00 2.04E+01 5.09E-01 2.07E+01
    STD 4.01E-31 2.55E-01 1.24E-01 8.63E-02 9.10E-01 2.74E-03 2.23E-02 2.42E-01 5.90E-02 2.16E-01 1.94E+00 1.06E-01
F11 AVG 0.00E+00 1.26E+04 1.23E+04 5.75E+03 1.52E+03 6.58E-02 2.83E+04 2.87E+03 3.92E-02 2.47E+04 1.07E-16 1.85E+04
    STD 0.00E+00 1.63E+03 5.18E+02 1.78E+02 2.66E+02 8.82E-02 4.21E+02 1.78E+02 3.58E-03 4.51E+02 2.03E-17 2.22E+03
F12 AVG 1.02E-06 1.14E+10 7.73E+09 1.56E+09 8.11E+06 1.15E+00 3.63E+10 6.76E+07 6.53E-01 3.04E+10 6.94E-01 3.72E+10
    STD 1.16E-06 1.27E+09 6.72E+08 1.46E+08 3.46E+06 1.82E-01 1.11E+09 1.80E+07 2.45E-02 9.72E+08 1.90E-02 7.67E+08
F13 AVG 8.41E-04 1.91E+10 1.58E+10 4.17E+09 8.96E+07 1.21E+02 6.61E+10 4.42E+08 1.32E+02 5.62E+10 9.98E+01 6.66E+10
    STD 1.18E-03 4.21E+09 1.56E+09 2.54E+08 3.65E+07 1.11E+01 1.40E+09 7.91E+07 1.48E+00 1.76E+09 1.31E-02 2.26E+09
4.6. Engineering benchmark sets

In this section, the proposed HHO is applied to six well-known benchmark engineering problems. Tackling engineering design tasks using P-metaheuristics is a well-regarded research direction in previous works [60,61]. The results of HHO are compared to those of various conventional and modified optimizers proposed in previous studies. Table 9 tabulates the details of the tackled engineering design tasks.
4.6.1. Three-bar truss design problem
This problem can be regarded as one of the most studied cases in previous works [62]. It can be described mathematically as follows:

Consider $\vec{X} = [x_1 \; x_2] = [A_1 \; A_2]$,

Minimize $f(\vec{X}) = (2\sqrt{2}\,x_1 + x_2) \times l$,

Subject to
$g_1(\vec{X}) = \dfrac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0$,
$g_2(\vec{X}) = \dfrac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0$,
$g_3(\vec{X}) = \dfrac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0$,

Variable range $0 \le x_1, x_2 \le 1$,

where $l = 100$ cm, $P = 2$ kN/cm$^2$, and $\sigma = 2$ kN/cm$^2$.
Fig. 13 demonstrates the shape of the formulated truss and the related forces on this structure. With regard to Fig. 13 and the formulation, we have two parameters: the area of bars 1 and 3, and the area of bar 2. The objective of this task is to minimize the total weight of the structure. In addition, this design case has several constraints, including stress, deflection, and buckling.
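Under the formulation above, the constrained objective can be folded into a single penalized fitness that any unconstrained optimizer can minimize. This is a minimal static-penalty sketch, not the exact barrier approach used in the paper; the penalty weight is an arbitrary choice, and the sample design point is merely a feasible configuration near the best-known weight (around 263.9).

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # l = 100 cm, P = sigma = 2 kN/cm^2

def truss_constraints(x1, x2):
    """The three stress constraints g_i(X) <= 0 of the formulation."""
    den = math.sqrt(2) * x1**2 + 2 * x1 * x2
    g1 = (math.sqrt(2) * x1 + x2) / den * P - SIGMA
    g2 = x2 / den * P - SIGMA
    g3 = 1.0 / (math.sqrt(2) * x2 + x1) * P - SIGMA
    return g1, g2, g3

def penalized_weight(x1, x2, penalty=1e6):
    """Structure weight plus a static penalty for violated constraints."""
    weight = (2 * math.sqrt(2) * x1 + x2) * L
    violation = sum(max(0.0, g) for g in truss_constraints(x1, x2))
    return weight + penalty * violation

# A feasible design close to the best-known region of this problem
print(round(penalized_weight(0.79, 0.41), 2))
```

With this wrapper, a minimizer only ever sees a scalar fitness: feasible designs are scored by their true weight, while infeasible ones are pushed away by the penalty term.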
HHO is applied to this case based on 30 independent runs with 30 hawks and 500 iterations in each run. Since this benchmark case has constraints, we need to integrate HHO with a constraint-handling technique. For the sake of simplicity, we used a barrier penalty approach [82] in HHO. The results of HHO are compared to those reported for DEDS [63], MVO [64], GOA [62], MFO [56], PSO–DE [65], SSA [60], MBA [66], Tsa [67],
Fig. 12. Scalability results of the HHO versus other methods in dealing with the F1–F13 cases with different dimensions.
Ray and Sain [68], and CS [34] in the previous literature. Table 10 shows the detailed results of the proposed HHO compared to the other techniques. Based on the results in Table 10, it is observed that HHO reveals very competitive results compared to DEDS,
Table 7
Comparison of average running time results (seconds) over 30 runs for larger-scale problems with 1000 variables.

ID Metric HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 2.03E+00 8.27E+01 8.29E+01 1.17E+02 2.13E+00 4.47E+00 1.60E+00 5.62E+00 5.47E+00 3.23E+00 2.21E+00 2.38E+00
   STD 4.04E-01 5.13E+00 4.04E+00 6.04E+00 2.62E-01 2.64E-01 2.08E-01 4.42E-01 4.00E-01 2.06E-01 3.62E-01 2.70E-01
F2 AVG 1.70E+00 8.41E+01 8.28E+01 1.16E+02 2.09E+00 4.37E+00 1.61E+00 2.57E+00 5.50E+00 3.25E+00 1.99E+00 2.28E+00
   STD 7.37E-02 4.65E