An Accelerated Three Term Efficient Algorithm for
Numerical Optimization
Lajan J. Mohammed1* and Ivan S. Latif 2
1College of Science, University of Sulaimani, lajan.mohammed@univsul.edu.iq, Sulaimani 64001, Iraq.
2College of Education, University of Salahaddin, ivan.latif@edu.su.krd, Erbil, Iraq.
*Corresponding author email: lajan.jalel85@gmail.com or lajan_jalil@ymail.co.uk; mobile: 07504306398
Received: 24/11/2021
Accepted: 22/12/2022
Published: 31/12/2022
ABSTRACT
Background:
A new optimization algorithm is presented. The method is based on a new non-monotone line search and an accelerated three-term conjugate gradient method combined with a damped Quasi-Newton method. Compared to previous methods, the gains in efficiency, measured by more than one factor over different optimization problems, are more dramatic due to the ability of the technique to utilize existing data.
Materials and Methods:
New non-monotone line search, new modification of the damped Quasi-Newton method, motivation and new Quasi-Newton algorithm (MQ), and global convergence.
Results:
In this work, we compare our new algorithm with some classical strategies such as [7] on unconstrained nonlinear optimization problems, using functions obtained from Andrei [5, 6], Waziri and Sabi'u (2015) [10], and La Cruz et al. (2004) [3]. The numerical experiments demonstrate the performance of the proposed method. We selected seven unconstrained problems with sizes varying from 10 to 100. We consider three sizes of each problem, so the total number of test problems is 21. We stop the iteration when \|g_k\| \le 10^{-6} is satisfied. All codes were written in MATLAB R2017a and run on a PC with an Intel Core i4 processor, 4 GB of RAM, and a 2.3 GHz CPU. We solved the test problems using two different initial starting points.
Conclusion:
In this research article, an accelerated three-term efficient algorithm for numerical optimization is presented. The method is a completely derivative-free algorithm with lower NOI, NOF, and CPU time compared to the existing methods. Using classical assumptions, global convergence is also proved. Numerical results using the three-term efficient algorithm show that the algorithm is promising.
Keywords:
Global convergence, non-monotone line search, three-term conjugate gradient method.
INTRODUCTION
This paper is concerned with damped Quasi-Newton methods for finding a local minimum of the unconstrained optimization problem [4, 7, 17]

\min_{x \in \mathbb{R}^n} f(x)          (1)

under line search algorithms with the basic iteration

x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots          (2)

where x_0 is given, \alpha_k denotes the step length, and d_k is the search direction defined by

d_0 = -g_0          (3)

d_k = -B_k^{-1} g_k, \quad k \ge 1          (4)

where g_k = \nabla f(x_k) denotes the gradient of f at x_k and B_k approximates the Hessian of f at x_k. Since the matrix B_0 is used to initiate a Quasi-Newton update at the end of the first iteration, certain new information may be used to define this matrix, not necessarily equal to the user-supplied B_0, though usually at each iteration a new Hessian approximation B_{k+1} is calculated by updating B_k using a damped Quasi-Newton method.
It is well known that the sufficient descent condition, the conjugacy condition, and a small condition number are important factors in accelerating the iteration [16-18]. To accelerate the iteration and reduce round-off error, a dynamical compensation is suggested in our proposed damped Quasi-Newton method so that these conditions are satisfied as much as possible. The aim is to accelerate the iteration when the second-derivative approximation of the objective function does not satisfy the proposed damped Quasi-Newton equation. Such methods are mostly used when the second derivative matrix of the objective function is either unavailable or too costly to compute; they are very similar to Newton's method but avoid the need to compute Hessian matrices by recurring approximations from iteration to iteration [15].
A general class of Quasi-Newton updates was proposed by Broyden [1, 2, 14]:

B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{s_k^T y_k} + \phi_k (s_k^T B_k s_k) w_k w_k^T          (5)

where

s_k = x_{k+1} - x_k          (6)

y_k = g_{k+1} - g_k          (7)

w_k = \frac{y_k}{s_k^T y_k} - \frac{B_k s_k}{s_k^T B_k s_k}          (8)
Al-Baali [4, 18] and the works [13, 15-17] show that the performance of the BFGS method can be improved if y_k is modified by the damped technique before updating:

\hat{y}_k = \theta_k y_k + (1 - \theta_k) B_k s_k          (9)

where \theta_k is a parameter chosen appropriately large in the interval (0, 1]. The resulting damped (D-)BFGS method was proposed by Powell [9, 18] for the Lagrangian function in constrained optimization and has been used many times with only certain values of \theta_k; see, for example, (Fletcher 1987; Nocedal and Wright 1999) [7]. The aim of this paper is to show that the new damped technique with small values of \theta_k can be used to modify the BFGS method for unconstrained optimization. We illustrate this possibility for several numerical optimization problems using a non-monotone line search.
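To make the damped update (5)-(9) concrete, the following sketch applies Powell-style damping to y_k before a standard BFGS update. It is written in Python/NumPy rather than the MATLAB used for the experiments, and the function name, the 0.2 threshold, and the tiny usage values are illustrative assumptions rather than the exact choices of the proposed method.

```python
import numpy as np

def damped_bfgs_update(B, s, y, threshold=0.2):
    """One damped BFGS update of the Hessian approximation B.

    Powell-style damping replaces y by y_hat = theta*y + (1 - theta)*B s
    so that s^T y_hat stays sufficiently positive; the 0.2 threshold is
    the customary illustrative choice, not necessarily the paper's value.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= threshold * sBs:
        theta = 1.0                              # ordinary (undamped) BFGS step
    else:
        theta = (1.0 - threshold) * sBs / (sBs - sy)
    y_hat = theta * y + (1.0 - theta) * Bs       # damped gradient difference
    # standard BFGS formula applied with y_hat in place of y
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)

# minimal usage on a 2-dimensional example (illustrative values only)
B = damped_bfgs_update(np.eye(2), np.array([1.0, 0.0]), np.array([0.5, 0.1]))
print(B)
```

Because s^T y_hat is bounded below by a positive multiple of s^T B s, the updated matrix stays positive definite even when the curvature condition s^T y > 0 fails, which is the point of the damping.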
Materials and Methods
New non-monotone line search
In order to analyze the convergence of our algorithm we need the following assumptions.
H1: The objective function f is continuously differentiable and has a lower bound on \mathbb{R}^n.
H2: The gradient g of f is Lipschitz continuous on an open convex set containing the level set L(x_0) = \{x : f(x) \le f(x_0)\}, with x_0 given; i.e., there exists a constant L > 0 such that

\|g(x) - g(y)\| \le L \|x - y\| \quad \text{for all } x, y \text{ in this set.}          (10)
Since L is usually not known a priori in practical computation but plays an important role in algorithm design, we need to estimate it; for the new non-monotone line search we adopt the same approach for estimating L as proposed in [11-13]. Any constant no smaller than L is also a Lipschitz constant. However, a very large Lipschitz constant can lead to a very small step size and make damped Quasi-Newton methods with the new non-monotone line search converge very slowly; therefore we should seek Lipschitz constants that are as small as possible in practical computation. In the k-th iteration we take the approximate Lipschitz constant as

(11)

where
and

(12)

with the remaining constants given and one of them being a large positive number used when the new non-monotone line search is applied in practical computation. The global convergence and convergence rate will be given in the subsequent section.

New non-monotone line search. Given the required parameters, set the initial quantities accordingly, and take \alpha_k as the largest value in the trial set such that

(13)

and

(14)

hold, where

(15)

and L_k is estimated by (11).
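Since the exact acceptance rules (13)-(15) are not reproduced above, the following Python sketch only illustrates the two standard ingredients this subsection describes: a safeguarded local estimate of the Lipschitz constant from successive gradients, and an Armijo-type backtracking test measured against the maximum of a few recent function values (a common non-monotone variant). The formula \|g_k - g_{k-1}\| / \|x_k - x_{k-1}\|, the safeguards, and all constants are assumptions, not the paper's definitions.

```python
import numpy as np

def approx_lipschitz(x, x_prev, g, g_prev, L_prev=1.0, L_max=1e8):
    """Safeguarded local estimate of the Lipschitz constant of the gradient."""
    ds = np.linalg.norm(x - x_prev)
    if ds == 0.0:
        return L_prev
    return min(max(np.linalg.norm(g - g_prev) / ds, 1e-8), L_max)

def nonmonotone_armijo(f, x, d, g, f_history, L, sigma=1e-4, beta=0.5, max_back=30):
    """Backtracking search: accept the largest alpha = beta^i / L such that
    f(x + alpha d) <= max(recent f values) + sigma * alpha * g^T d."""
    f_ref = max(f_history)          # non-monotone reference value
    gd = g @ d                      # directional derivative (should be negative)
    alpha = 1.0 / L                 # Lipschitz-based initial trial step
    for _ in range(max_back):
        if f(x + alpha * d) <= f_ref + sigma * alpha * gd:
            return alpha
        alpha *= beta
    return alpha
```

A large estimate of L shrinks the initial trial step 1/L, which is exactly the slow-convergence effect the text warns about; the cap L_max in the sketch is the kind of safeguard a large positive bound provides.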
New Modification of the Damped Quasi-Newton Method
Quasi-Newton (QN) methods are recognized today as among the most efficient ways to solve nonlinear unconstrained optimization problems. These methods are mostly used when the second derivative matrix of the objective function is either unavailable or too costly to compute. A general class of Quasi-Newton updates was proposed by Broyden [2, 14]:

(16)

(17)

where the update depends on a parameter for which there are three popular choices [8, 9, 14].
To improve the performance of the QN update, Biggs [7] proposed to choose the update so that it satisfies the following modified equation
(18)
where the parameter appearing in (18) is a scaling parameter.
Motivation and New Quasi-Newton Algorithm (MQ)
Now we describe the algorithm of the proposed method as follows:
Step 1: Choose an initial point x_0 and the required parameters, and set k = 0.
Step 2: If \|g_k\| \le 10^{-6}, then stop; otherwise continue.
Step 3: Set x_{k+1} = x_k + \alpha_k d_k, where the direction d_k is defined as above and \alpha_k is determined by the new non-monotone line search (13).
Step 4: Set k = k + 1 and go to Step 2.
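The self-contained Python skeleton below shows one way Steps 1-4 could fit together, pairing a damped BFGS update with a non-monotone backtracking search; it illustrates the algorithmic structure under the assumptions already noted (Powell-style damping with threshold 0.2, an Armijo constant of 1e-4, a memory of five function values) and is not the authors' MATLAB implementation.

```python
import numpy as np

def mq_like_solver(f, grad, x0, tol=1e-6, max_iter=500, memory=5):
    """Damped quasi-Newton skeleton with a non-monotone Armijo backtracking
    line search; all constants are illustrative."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                     # Step 1: initial Hessian approximation
    g = grad(x)
    f_hist = [f(x)]                        # recent values for the non-monotone test
    k = 0
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:       # Step 2: stopping criterion
            break
        d = -np.linalg.solve(B, g)         # Step 3: quasi-Newton direction
        alpha, f_ref, gd = 1.0, max(f_hist), g @ d
        while f(x + alpha * d) > f_ref + 1e-4 * alpha * gd and alpha > 1e-12:
            alpha *= 0.5                   # backtracking for the step length
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        Bs, sBs, sy = B @ s, s @ (B @ s), s @ y
        theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
        y_hat = theta * y + (1.0 - theta) * Bs   # Powell-damped difference
        B = B - np.outer(Bs, Bs) / sBs + np.outer(y_hat, y_hat) / (s @ y_hat)
        x, g = x_new, g_new                # Step 4: next iteration
        f_hist = (f_hist + [f(x)])[-memory:]
    return x, k

# illustrative run on a small strictly convex quadratic
A = np.diag([1.0, 10.0])
x_star, iters = mq_like_solver(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                               np.array([5.0, -3.0]))
print(iters, np.round(x_star, 6))
```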
Global Convergence
Lemma a: Assume that (H1) and (H2) hold and that Algorithm MQ with the new non-monotone line search generates an infinite sequence; then there exist positive constants such that

(19)

Proof: Obviously, we can take the corresponding bound for (11), so we have

(20)

By letting the constants be chosen accordingly, we complete the proof.
Lemma b: Assume that (H1) and (H2) hold, and that Algorithm MQ with the new non-monotone line search generates an infinite sequence. If the stated condition and

(21)

hold, then the conclusion of the lemma follows.
Proof: By the two inequalities and the Cauchy-Schwarz inequality, we have

(22)

(23)

(24)

(25)
(26)
(27)
Therefore
(28)
and thus the proof is complete.
Numerical results and comparisons
In this work, we have n tendency to compare our new algorithm with the same classical
strategies like [7] by exploiting unconstrained nonlinear optimization problems the
functions obtained from Andrei [5 and 6] Waziri and Sabiu (2015)[10], and La couzetul
(2004)[3]. The numerical experiments demonstrate the performance of the proposed
method. We selected seven relatively unconstrained problems with the size varies from
10 to 100. We consider three sizes of each problem so that the total number of problem is
21 test problems. We stop the iteration when is satisfied All codes were
written in Matlab R2017a and run on a pc with Intel COREi4 with a processor with 4GB
of Ram and CPU 2.3GHZ we solved test problems using two different initial starting
points.
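How NOI, NOF, and CPU time are tallied is not spelled out in the text, so the small Python wrapper below is only one plausible way to reproduce the bookkeeping: it counts every objective evaluation for NOF, takes the iteration count returned by the solver as NOI, and measures processor time. The solver signature follows the sketch given earlier and is an assumption.

```python
import time
import numpy as np

class CountedFunction:
    """Wraps an objective so that every evaluation is counted (gives NOF)."""
    def __init__(self, f):
        self.f, self.nof = f, 0
    def __call__(self, x):
        self.nof += 1
        return self.f(x)

def run_test(solver, f, grad, x0):
    """Return (NOI, NOF, CPU seconds) for one problem and starting point."""
    counted = CountedFunction(f)
    t0 = time.process_time()
    _, noi = solver(counted, grad, np.asarray(x0, dtype=float))
    return noi, counted.nof, time.process_time() - t0
```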
PROBLEMS
The test problems are stated as follows:
Problem (1) the strictly convex function
Problem (2) the exponential function
Problem (3) The Tridiagonal system
Problem (4) the generalized Rosenbrock function
Problem (5) the generalized Oren and Spedicato function
Problem (6) the variable bond function
Problem (7) the generalized penalty function
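The problem formulas themselves did not survive extraction. As one concrete example, Problem (4) is usually stated as the generalized Rosenbrock function below; the formula and its gradient are taken from the standard test-function literature rather than recovered from this paper, and the other six problems should be read from the cited collections [3, 5, 6, 10].

```python
import numpy as np

def rosenbrock(x):
    """Generalized Rosenbrock: sum_i 100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    """Analytic gradient of the generalized Rosenbrock function."""
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g
```

These two functions can be passed directly to the solver sketch given earlier with a starting point of one's choice.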
Table 1. Numerical comparison of the other algorithm and the new algorithm (MQ)

Problem   n     New algorithm (MQ)              Other algorithm
                NOI    Time     NOF             NOI    Time     NOF
1         10    9      0.0133   18              34     0.1562   69
          50    12     0.0123   23              40     0.0875   82
          100   14     0.0266   28              44     0.1397   94
2         10    21     0.0247   48              28     0.0331   56
          50    18     0.0194   36              15     0.1768   32
          100   14     0.0015   28              14     0.0026   29
3         10    673    1.1453   1352            -      -        -
          50    560    1.0348   1120            1530   3.2361   3160
          100   761    1.3753   1682            1517   3.5601   3034
4         10    21     0.0471   42              30     0.0416   67
          50    66     0.0638   120             42     0.0794   89
          100   21     0.0395   44              44     0.0729   33
5         10    71     0.1526   152             -      -        -
          50    82     0.2893   174             -      -        -
          100   114    0.5643   226             -      -        -
6         10    26     0.0304   52              78     0.1840   156
          50    42     0.5607   86              92     1.4105   184
          100   38     0.347    76              100    8.9049   202
7         10    25     0.4078   50              7      0.0227   14
          50    108    0.1307   116             -      -        -
          100   32     0.0758   64              16     0.0784   32
Table 1 presents the numerical comparison of the other algorithm [10] and the new algorithm (MQ) in terms of NOI, NOF, and CPU time in seconds. From Table 1, the new algorithm has lower NOI, NOF, and CPU time in most of the problems; this is due to the good selection of the line search.
Table 2. Numerical comparison of the probability of the other algorithm and the probability of the new algorithm

Probability of new algorithm (MQ)          Probability of other algorithm
NOI        Time       NOF                  NOI        Time       NOF
0.003299   0.00209    0.003251             0.009364   0.008589   0.00941
0.004399   0.001933   0.004154             0.011016   0.004811   0.011182
0.005132   0.004181   0.005057             0.012118   0.007682   0.012819
0.007698   0.003882   0.008669             0.007711   0.00182    0.007637
0.006598   0.003049   0.006502             0.004131   0.009721   0.004364
0.005132   0.000236   0.005057             0.003856   0.000143   0.003955
0.246701   0.180016   0.244176             0          0          0
0.205279   0.162648   0.202276             0.421372   0.17794    0.430929
0.278959   0.216167   0.303775             0.417791   0.195755   0.413746
0.007698   0.007403   0.007585             0.008262   0.002287   0.009137
0.024194   0.010028   0.021672             0.011567   0.004366   0.012137
0.007698   0.006209   0.007947             0.012118   0.004008   0.0045
0.026026   0.023985   0.027452             0          0          0
0.030059   0.045472   0.031425             0          0          0
0.041789   0.088696   0.040816             0          0          0
0.009531   0.004778   0.009391             0.021482   0.010117   0.021274
0.015396   0.08813    0.015532             0.025337   0.077558   0.025092
0.01393    0.054541   0.013726             0.027541   0.489643   0.027547
0.009164   0.064097   0.00903              0.001928   0.001248   0.001909
0.039589   0.020543   0.02095              0          0          0
0.01173    0.011914   0.011559             0.004406   0.004311   0.004364
Moreover, Figures 1 and 2 are comparisons using the performance profiles of the new algorithm (MQ), both in the estimates of NOI and NOF and in the CPU time, as the dimension increases; this also shows the advantages of the three-term combination and of the logical selection of the parameter.
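The "probability" entries in Table 2 are consistent with dividing each Table 1 value by its column total, so that every column sums to one; for instance 9/2728 ≈ 0.003299 reproduces the first NOI entry for MQ. The short Python sketch below performs that normalization (the interpretation is inferred from the numbers, not stated explicitly in the text).

```python
import numpy as np

def column_fractions(values):
    """Divide every entry by its column total (Table 2 style normalization);
    failed runs recorded as '-' should be passed as np.nan."""
    values = np.asarray(values, dtype=float)
    return values / np.nansum(values, axis=0)

# NOI of the new algorithm (MQ) from Table 1, in row order
noi_mq = np.array([[9, 12, 14, 21, 18, 14, 673, 560, 761, 21, 66,
                    21, 71, 82, 114, 26, 42, 38, 25, 108, 32]]).T
print(column_fractions(noi_mq)[0, 0])   # ~0.003299, matching Table 2
```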
Figure 1. Comparison of NOI for 7 problems at dimensions 10, 50, and 100 (x-axis: the 21 test problems; y-axis: probability of NOI; series: NOI of the new algorithm and NOI of the other algorithm).

Figure 2. Comparison of time for 7 problems at dimensions 10, 50, and 100 (x-axis: the 21 test problems; y-axis: probability of time; series: time of the new algorithm and time of the other algorithm).
Figure 3. Comparison of NOF for 7 problems at dimensions 10, 50, and 100 (x-axis: the 21 test problems; y-axis: probability of NOF; series: NOF of the new algorithm and NOF of the other algorithm).
Conflict of interests.
There are no conflicts of interest.
References
[1] A. Antoniou and W.-S. Lu, "Practical Optimization: Algorithms and Engineering Applications", Springer Science and Business Media, 2017.
[2] C.G. Broyden, "Quasi-Newton methods and their application to function minimization", Math. Comp., Vol. 21, pp. 368-381, 1967.
[3] W. La Cruz, J.M. Martínez, and M. Raydan, "Spectral residual method without gradient information for solving large-scale nonlinear systems: theory and experiments", 2004.
[4] M. Al-Baali, E. Spedicato, and F. Maggioni, "Broyden's quasi-Newton methods for a nonlinear system of equations and unconstrained optimization", Optimization Methods and Software, Vol. 29, No. 5, pp. 937-954, 2014.
[5] N. Andrei, "An unconstrained optimization test functions collection", Advanced Modeling and Optimization, An Electronic International Journal, Vol. 10, No. 1, pp. 147-161, 2008.
[6] N. Andrei, "Open problems in nonlinear conjugate gradient algorithms for unconstrained optimization", Bulletin of the Malaysian Mathematical Sciences Society, Second Series, Vol. 34, No. 2, pp. 319-330, 2011.
[7] J. Nocedal and S.J. Wright, "Numerical Optimization", Springer Series in Operations Research, 2nd edition, Springer, New York, 2006.
[8] K.H. Phua and S.B. Chew, "Symmetric rank-one update and quasi-Newton methods", in K.H. Phua et al., Eds., Optimization Techniques and Applications, World Scientific, Singapore, pp. 52-63, 1992.
[9] M.J.D. Powell, "How bad are the BFGS and DFP methods when the objective function is quadratic?", Mathematical Programming, Vol. 34, pp. 34-47, 1986.
[10] M.Y. Waziri and J. Sabi'u, "An alternative conjugate gradient method and its global convergence for solving symmetric nonlinear equations", International Journal of Mathematics and Mathematical Sciences, Vol. 2015, Article ID 961487, 8 pages, 2015.
[11] Z.J. Shi and J. Shen, "Convergence of descent method without line search", Applied Mathematics and Computation, Vol. 167, pp. 94-107, 2005.
[12] Z.J. Shi and J. Shen, "Step-size estimation for unconstrained optimization methods", Computational and Applied Mathematics, Vol. 24, No. 3, pp. 399-416, 2005.
[13] Z.J. Shi and J. Guo, "A new family of conjugate gradient methods", Journal of Computational and Applied Mathematics, Vol. 224, pp. 444-457, 2009.
[14] Th.S.Ch. Hamsa, I.A. Huda, T.H. Eman, and Y.Al. Abbas, "A new modification of the quasi-Newton method for unconstrained optimization", Indonesian Journal of Electrical Engineering and Computer Science, Vol. 21, No. 3, pp. 1683-1691, March 2021, ISSN: 2502-4752, DOI: 10.11591/ijeecs.v21.i3.pp1683-1691.
[15] G. Yuan, Z. Sheng, B. Wang, W. Hu, and C. Li, "The global convergence of a modified BFGS method for non-convex functions", Journal of Computational and Applied Mathematics, Vol. 327, pp. 274-294, 2018.
[16] Y. Dai, J. Yuan, and Y. Yuan, "Modified two-point stepsize gradient methods for unconstrained optimization", Computational Optimization and Applications, Vol. 22, No. 3, pp. 103-109, 2002.
[17] I.A.R. Moghrabi, "A non-secant quasi-Newton method for unconstrained nonlinear optimization", Cogent Engineering, Vol. 9, pp. 20-36, 2022.
[18] G. Yuan, Z. Wang, and P. Li, "A modified Broyden family algorithm with global convergence under a weak Wolfe-Powell line search for unconstrained non-convex problems", Calcolo, Vol. 57, pp. 35-47, 2020.