Wah June Leong
Universiti Putra Malaysia | UPM · Department of Mathematics

B.Sc, MS, PhD

About

127 Publications · 22,548 Reads
1,090 Citations
Additional affiliations
October 2020 - February 2021 · Universiti Putra Malaysia · Professor (Full)

Publications (127)
Article
We propose a new monotone algorithm for unconstrained optimization in the frame of Barzilai and Borwein (BB) method and analyze the convergence properties of this new descent method. Motivated by the fact that BB method does not guarantee descent in the objective function at each iteration, but performs better than the steepest descent method, we t...
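The BB step size referenced here is standard: with s = x_k - x_{k-1} and y = g_k - g_{k-1}, the first BB formula takes the step length s^T s / s^T y. A minimal sketch of the plain (nonmonotone) BB iteration, assuming a user-supplied gradient f_grad; the monotone safeguard proposed in the paper is not reproduced:

```python
import numpy as np

def bb_gradient(f_grad, x0, tol=1e-6, max_iter=1000):
    """Plain Barzilai-Borwein gradient iteration (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    g = f_grad(x)
    alpha = 1e-3                          # conservative first step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = f_grad(x_new)
        s, y = x_new - x, g_new - g
        sty = s @ y
        # BB1 step alpha = s^T s / s^T y, with a fallback when s^T y <= 0
        alpha = (s @ s) / sty if sty > 1e-12 else 1e-3
        x, g = x_new, g_new
    return x
```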
Article
Full-text available
Two basic disadvantages of the symmetric rank one (SR1) update are that the SR1 update may not preserve positive definiteness when starting with a positive definite approximation and the SR1 update can be undefined. A simple remedy to these problems is to restart the update with the initial approximation, mostly the identity matrix, whenever these...
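The SR1 formula and the restart remedy described above follow a standard pattern; a minimal sketch, where the skipping threshold r is an assumption:

```python
import numpy as np

def sr1_update(B, s, y, r=1e-8):
    """SR1 update of a Hessian approximation B with the restart remedy
    described above: fall back to the identity when the update is
    (nearly) undefined, i.e. when (y - B s)^T s is close to zero."""
    v = y - B @ s
    denom = v @ s
    if abs(denom) < r * np.linalg.norm(v) * np.linalg.norm(s):
        return np.eye(len(s))             # restart with the identity
    return B + np.outer(v, v) / denom
```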
Article
Full-text available
This paper presents a class of low memory quasi-Newton methods with standard backtracking line search for large-scale unconstrained minimization. The methods are derived by means of least change updating technique analogous to that for the DFP method except that the full quasi-Newton matrix has been replaced by some diagonal matrix. We establish co...
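The standard backtracking line search with the Armijo condition referenced here, as a minimal sketch; f and grad are assumed callables on NumPy arrays:

```python
def backtracking(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink alpha until the Armijo condition
    f(x + alpha*d) <= f(x) + c*alpha*grad(x)^T d holds."""
    fx, slope = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho
    return alpha
```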
Article
Full-text available
Gradient methods are popular due to the fact that only the gradient of the objective function is required. On the other hand, these methods can be very slow if the objective function is very ill-conditioned. One possible reason for the inefficiency of the gradient methods is that a constant criterion, which aims only at reducing the function value, has b...
Article
Full-text available
This paper deals with the global stability for some composite stochastic control systems with nontrivial solutions by means of dynamic feedback laws. In particular, we establish feedback law for global asymptotic stabilization of a control subsystem deduced from the composite stochastic system and apply the result to stabilize the original composit...
Article
Full-text available
In this paper, we propose a sparse equity portfolio optimization model that aims at minimizing transaction cost by avoiding small investments while promoting diversification to help mitigate the volatility in the portfolio. The former is achieved by including the ℓ0...
Article
In this research, we tackle the ℓ0-norm sparse optimization problem with an underdetermined system as a constraint. This problem is turned into an unconstrained optimization problem using the Lagrangian method and solved using the proximal variable metric method. This approach combines the proximal and variable metric methods by substituting a...
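The proximal step for the ℓ0-norm has a well-known closed form (hard thresholding). A sketch of a plain proximal gradient iteration for min f(x) + lam*||x||_0; the variable metric substitution described above is not reproduced, and the step size is an assumption:

```python
import numpy as np

def prox_l0(v, lam):
    """Proximal operator of lam*||.||_0: hard thresholding."""
    w = v.copy()
    w[np.abs(v) <= np.sqrt(2.0 * lam)] = 0.0
    return w

def proximal_gradient_l0(grad_f, x0, lam, step=1e-2, iters=500):
    """Plain proximal gradient for min f(x) + lam*||x||_0 (sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_l0(x - step * grad_f(x), step * lam)
    return x
```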
Article
Full-text available
This study reported the ecological risks and human health risk assessments of five potentially toxic metals in the topsoils of six land uses in Peninsular Malaysia. It was found that industry, landfill, rubbish heap, and mining areas were categorized as “very high ecological risk”. The land uses of industry, landfill and rubbish heap were found to...
Article
Full-text available
This article examines an economic growth model that expresses the interaction between production, technology stock, and research and development (R&D) investments. The goal of this study is to maximize production. Considering the presence of Gaussian white noises, this model is reformulated as a stochastic optimal control problem, where the R&D inv...
Article
Full-text available
The present study investigated the antioxidant enzyme activities (AEA) of ascorbate peroxidase (APX), catalase (CAT), guaiacol peroxidase (GPX), and superoxide dismutase (SOD) as biomarkers of Cu and Pb stress by using Centella asiatica grown in an experimental hydroponic condition. The results showed (i) higher accumulations of Cu and Pb in the ro...
Preprint
In this paper, we propose a sparse equity portfolio optimization (SEPO) based on the mean-variance portfolio selection model. Aimed at minimizing transaction cost by avoiding small investments, this new model includes $\ell_0$-norm regularization of the asset weights to promote sparsity, hence the acronym SEPO-$\ell_0$. The selection model is also...
Article
Consider a nonconvex minimization problem where the objective function is nonlinear and twice differentiable. To gain more information about the objective function, it is essential to obtain all its stationary points and study the behaviour of these points. Since many nonlinear functions are expressible as polynomials via interpolation, there is a nee...
Article
Full-text available
In this paper, we propose to use the spectral proximal method to solve sparse optimization problems. Sparse optimization refers to an optimization problem involving the ℓ0-norm in the objective or constraints. Previous research showed that the spectral gradient method outperforms other standard unconstrained optimization methods. This is due...
Article
This paper proposes a nonmonotone spectral gradient method for solving large-scale unconstrained optimization problems. The spectral parameter is derived from the eigenvalues of an optimally sized memoryless symmetric rank-one matrix obtained under the measure defined as a ratio of the determinant of updating matrix over its largest eigenvalue. Cou...
Article
This paper investigates the exponential stability of some interconnected stochastic control systems with non-trivial equilibria, for which the considered interconnected systems are induced by the composition of some stochastic subsystems. Of particular interest is the notion of stability with respect to a set containing the non-trivial equilibria....
Article
The secant equation (quasi-Newton condition) plays one of the most important roles in finding an optimal solution in nonlinear optimization. Curvature information must satisfy the usual secant equation to ensure positive definiteness of the Hessian approximation. In this work, we present a new diagonal updating to improve the Hessian approximation with a modifying w...
Article
Full-text available
Conjugate gradient methods play an important role in many fields of application due to their simplicity, low memory requirements, and global convergence properties. In this paper, we propose an efficient three-term conjugate gradient method by utilizing the DFP update for the inverse Hessian approximation which satisfies both the sufficient descent...
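For context, a generic nonlinear CG skeleton with the Polak-Ribiere+ coefficient and Armijo backtracking; this is not the DFP-based three-term direction of the paper, just the standard d_{k+1} = -g_{k+1} + beta_k*d_k template it builds on:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic nonlinear CG with PR+ coefficient (illustrative)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                    # safeguard: restart on non-descent
            d = -g
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                  # Armijo backtracking
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+
        d = -g_new + beta * d
        g = g_new
    return x
```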
Article
In this work, we present a new class of diagonal quasi-Newton methods for solving large-scale unconstrained optimization problems. The methods are derived by means of variational principle under the generalized Frobenius norm. We show global convergence of our methods under the standard line search with Armijo condition. Numerical results are carri...
Conference Paper
The technique of nonmonotone line search has received much attention in nonlinear optimization. This technique can improve the computational cost of the line search process and increase the rate of convergence of the algorithm. However, the convergence of this line search scheme relies on some rather restrictive assumptions concerning the search direc...
Conference Paper
Quasi-Newton method has been widely used in solving unconstrained optimization problems. The popularity of this method is due to the fact that only the gradient of the objective function is required at each iterate. Since second derivatives (Hessian) are not required, quasi-Newton method is sometimes more efficient than the Newton method, especiall...
Article
At present, the ability to promote the national economy by adjusting to political, economic, and technological variables is one of the largest challenges faced by organizational productivity. This challenge prompts changes in structure and line productivity, given that cash has not been invested. Thus, the management searches for investment opportunities...
Research
The primary goal of the sensitivity analysis is to determine exactly what the effects of slight differences in the parameters are on the overall solution, and whether the optimal solution is sensitive to changes in these assumptions.
Article
Full-text available
Fish tilapia Oreochromis mossambicus were collected from a contaminated Seri Serdang (SS) pond potentially receiving domestic effluents and an uncontaminated pond from Universiti Putra Malaysia (UPM). The fish were dissected into four parts namely gills, muscles, intestines, and liver. All the fish parts were pooled and analyzed for the concentrati...
Article
We study the convergence properties of a class of low memory methods for solving large-scale unconstrained problems. This class of methods belongs to the quasi-Newton family, except that the approximation to the Hessian, at each step, is updated by means of a diagonal matrix. Using appropriate scaling, we show that the methods can be implement...
Article
Full-text available
Lyapunov-like characterization for the problem of input-to-state stability in probability of nonautonomous stochastic control systems is established. We extend the well-known Artstein-Sontag theorem to derive the necessary and sufficient conditions for the input-to-state stabilization of stochastic control systems. An illustrative example is provi...
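For context, the Lyapunov-type sufficient condition underlying this line of work: for a stochastic system $dx = f(x,u)\,dt + g(x,u)\,dW$, stability in probability is typically certified by a smooth $V$ whose infinitesimal generator satisfies

$$\mathcal{L}V(x) = \nabla V(x)^{\top} f(x,u) + \tfrac{1}{2}\,\operatorname{tr}\!\left[ g(x,u)^{\top}\,\nabla^{2}V(x)\,g(x,u) \right] \le -W(x)$$

for some positive definite $W$; the exact hypotheses of the paper may differ.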
Article
Full-text available
The divisible load scheduling is a paradigm in the area of distributed computing. The traditional divisible load theory is based on the fact that the communications and computations are obedient and do not cheat the algorithm. A review of the literature shows that the divisible load model fails to achieve its optimal performance if the processors do...
Article
In this paper, we suggest a numerical method based upon hybrid of Chebyshev wavelets and finite difference methods for solving well-known nonlinear initial-value problems of Lane-Emden type. The useful properties of the Chebyshev wavelets and finite difference method are utilized to reduce the computation of the problem to a set of nonlinear algebr...
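For reference, the classical Lane-Emden initial-value problem of index $m$ treated here is

$$y''(x) + \frac{2}{x}\,y'(x) + y^{m}(x) = 0, \qquad y(0) = 1, \quad y'(0) = 0.$$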
Article
In this paper, we make a modification to the standard conjugate gradient method so that its search direction satisfies the sufficient descent condition. We prove that the modified conjugate gradient method is globally convergent under Armijo line search. Numerical results show that the proposed conjugate gradient method is efficient compared to som...
Article
In this paper, we investigate the possible use of control theory, particularly theory on optimal control to derive some numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this meth...
Article
The iterative solution of unconstrained optimization problems has been found in a variety of significant applications of research areas, such as image restoration. In this paper, we present an efficient limited memory quasi-Newton technique based on symmetric rank-one updating formula to compute meaningful solutions for large-scale problems arising...
Article
Full-text available
In this paper we present a new line search method known as the HBFGS method, which uses the search direction of the conjugate gradient method with the quasi-Newton updates. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) update is used as approximation of the Hessian for the methods. The new algorithm is compared with the BFGS method in terms of iterat...
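The BFGS update used for the Hessian approximation is the standard one: with $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$,

$$B_{k+1} = B_k - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k} + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}.$$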
Article
Full-text available
Conjugate gradient methods have played a useful and powerful role in solving large-scale optimization problems, which have become more interesting and essential in many disciplines such as engineering, statistics, the physical sciences, and the social and behavioral sciences, among others. In this paper, we present an application of a proposed three-term con...
Conference Paper
Recently, the subspace quasi-Newton (SQN) method has been widely used in solving large-scale unconstrained optimization. Besides constructing sub-problems in low dimensions so that the storage requirement as well as the computational cost can be reduced, it can also be implemented extremely fast when the objective function is a combination of computational...
Conference Paper
Full-text available
In this paper we present a new search direction known as the CG-BFGS method, which uses the conjugate gradient search direction within the quasi-Newton methods. The new algorithm is compared with the quasi-Newton methods in terms of the number of iterations and CPU-time. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used...
Article
In this paper, we propose a hybrid ODE-based quasi-Newton (QN) method for unconstrained optimization problems, which combines the idea of low-order implicit Runge–Kutta (RK) techniques for gradient systems with the QN type updates of the Jacobian matrix such as the symmetric rank-one (SR1) update. The main idea of this approach is to associate a QN...
Article
Full-text available
The quasi-Newton method is known as one of the most efficient methods for solving large-scale unconstrained optimization problems. Hence, a new hybrid method, known as the BFGS-CG method, has been created based on these properties, combining the search directions of the conjugate gradient and quasi-Newton methods. In comparison...
Article
Full-text available
In this paper, we propose a three-term conjugate gradient method via the symmetric rank-one update. The basic idea is to exploit the good properties of the SR1 update in providing quality Hessian approximations to construct a conjugate gradient line search direction that requires no matrix storage and possesses the sufficient descent property. Numeri...
Article
Full-text available
Numerical methods to solve unconstrained optimization problems may be viewed as control systems. An important principle in dynamic control system theory is that control policies should be prescribed in a feedback manner rather than in an open loop manner. This is to ensure that the outcomes are not sensitive to small errors in the state variables....
Article
The subspace quasi-Newton (SQN) method has been widely used in large-scale unconstrained optimization problems. Its popularity is due to the fact that the method can construct subproblems in low dimensions so that the storage requirement as well as the computation cost can be minimized. However, the main drawback of the SQN method is that it can be very slo...
Article
Symmetric rank-one update (SR1) is known to have good numerical performance among the quasi-Newton methods for solving unconstrained optimization problems, as evident from the recent study of Farzin et al. (2011). However, it is well known that the SR1 update may not preserve positive definiteness even when updated from a positive definite approxima...
Article
Full-text available
The conjugate gradient method plays an important role in solving large-scale problems and the quasi-Newton method is known as the most efficient method in solving unconstrained optimization problems. Therefore, in this paper, a new hybrid method between the conjugate gradient method and the quasi-Newton method for solving optimization problems is...
Article
The nonlinear conjugate gradient (CG) methods have been widely used in solving unconstrained optimization problems. They are well-suited for large-scale optimization problems due to their low memory requirements and low computational costs. In this paper, a new diagonal preconditioned conjugate gradient (PRECG) algorithm is designed, and this is...
Data
Full-text available
Divisible load theory has become a popular area of research during the past two decades. Based on divisible load theory the computations and communications can be divided into some arbitrarily independent parts and each part can be processed independently by a processor. Existing divisible load scheduling algorithms do not consider any priority...
Article
Full-text available
Divisible load theory has become a popular area of research during the past two decades. Based on divisible load theory the computations and communications can be divided into some arbitrarily independent parts and each part can be processed independently by a processor. Existing divisible load scheduling algorithms do not consider any priority for...
Article
Full-text available
The stabilization of stochastic differential control systems by means of dynamic feedback laws is provided. We extend the well-known Artstein–Sontag theorem to derive the necessary and sufficient conditions for the dynamic robust stabilization of stochastic differential systems. An explicit formula for feedback law exhibiting dynamic robust stabili...
Article
Full-text available
Sufficient conditions for the exponential input-to-state stability in probability in rth mean and for the almost sure exponential input-to-state stability in probability of a composite stochastic system are established. An illustrative example is provided to validate our results. MSC: 60H10, 93C10, 93D05, 93D15, 93D21, 93E15.
Article
Full-text available
The concepts of stability in probability of nontrivial solutions for stochastic nonlinear systems are analyzed in terms of a control Lyapunov function which is smooth except possibly at the origin. We show under certain hypotheses that the neighborhood of the origin is stable in probability. An illustrative example is provided. MSC: 60H10, 93C10,...
Article
Full-text available
This paper explores the stability of general line search methods in the sense of Lyapunov, for minimizing a smooth nonlinear function. In particular we give sufficient conditions for a line search method to be globally asymptotically stable. Our analysis suggests that the proposed sufficient conditions for asymptotic stability are equivalent to the...
Article
Full-text available
We define and analyse partial Newton iterations for the solutions of a system of algebraic equations. Firstly, we focus on a linear system of equations which does not require a line search. To apply a partial Newton method to a system of nonlinear equations we need a line search to ensure that the linearized equations are valid approximations of th...
Conference Paper
Full-text available
In this paper, we suggest a simple scaling of the BFGS-SD method for solving unconstrained optimization problems. For general convex functions, we prove that the method is globally convergent with inexact line searches. The results are obtained by implementing the algorithms in Matlab on the test problems. The comparison with the original BFGS...
Article
One of the well-known methods for solving large-scale unconstrained optimization is the limited memory quasi-Newton (LMQN) method. This method is derived from a subproblem in low dimension so that the storage requirement as well as the computation cost can be reduced. In this paper, we propose a preconditioned LMQN method which is generally more effecti...
Article
Full-text available
The asymptotic and practical stability in probability of stochastic control systems by means of feedback laws is provided. The main results of this work enable us to derive the sufficient conditions for the existence of a control Lyapunov function, which plays a leading role in the existence of stabilizing feedback laws. Particularly, the sufficient...
Article
Full-text available
We present a new gradient method that uses scaling and extra updating within the diagonal updating for solving unconstrained optimization problem. The new method is in the frame of Barzilai and Borwein (BB) method, except that the Hessian matrix is approximated by a diagonal matrix rather than the multiple of identity matrix in the BB method. The m...
Article
Full-text available
New results for exponential stability in probability of a composite stochastic control system are established. The main results of this paper enable us to derive sufficient conditions for exponential stability in rth mean and almost sure exponential stability in probability of a composite stochastic control system. Two numerical examples are given to...
Article
The main focus of this paper is to derive a new diagonal updating scheme via the direct weak secant equation. This new scheme allows us to improve the accuracy of the Hessian's approximation and is also capable of utilizing information gathered about the function in previous iterations. It is followed by a scaling approach that employs a scaling parameter b...
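For reference, the weak secant equation referred to here relaxes the full secant condition $B_{k+1} s_k = y_k$ to a single scalar condition,

$$s_k^{\top} B_{k+1} s_k = s_k^{\top} y_k, \qquad s_k = x_{k+1} - x_k,\; y_k = g_{k+1} - g_k,$$

which is what makes diagonal updating matrices feasible.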
Article
Full-text available
This paper proposes some diagonal matrices that approximate the (inverse) Hessian by parts using the variational principle that is analogous to the one employed in constructing quasi-Newton updates. The way we derive our approximations is inspired by the least change secant updating approach, in which we let the diagonal approximation be the sum of...
Article
Full-text available
In this study, we extend the technique of Waziri et al. (2010a) by incorporating the two-step scheme in the framework of the diagonal Jacobian updating method to solve large-scale systems of nonlinear equations. In this approach we use points from two previous steps, unlike the one-step approach in most Newton-like methods. The anticipation has been...
Article
Quasi-Newton (QN) methods are generally held to be the most efficient minimization methods for solving unconstrained optimization problems. Among the QN methods, symmetric rank-one (SR1) is one of the very competitive formulas. In the present paper, we propose a new SR1 method. The new technique attempts to improve the quality of the SR1 Hessian by...
Article
Nonlinear conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization. Their wide application in many fields is due to their low memory requirements and global convergence properties. Numerous studies and modifications have been conducted recently to improve this method. In this paper, a new class...
Article
Full-text available
This paper focuses on developing diagonal gradient-type methods that employ accumulative approach in multistep diagonal updating to determine a better Hessian approximation in each step. The interpolating curve is used to derive a generalization of the weak secant equation, which will carry the information of the local Hessian. The new parameteriza...
Article
Full-text available
We propose an approach to enhance the performance of a diagonal variant of the secant method for solving large-scale systems of nonlinear equations. In this approach, we consider a diagonal secant method that uses data from two preceding steps rather than a single step, derived using the weak secant equation, to improve the updated approximate Jacobian in diagonal...
Article
Full-text available
The performance of a genetic algorithm is dependent on the genetic operators, in general, and on the type of crossover operator, in particular. The population diversity is usually used as the performance measure for the premature convergence. In this paper, a fuzzy genetic algorithm is proposed for solving binary encoded combinatorial optimization...
Article
Full-text available
Diagonal quasi-Newton (DQN) methods are a class of quasi-Newton methods which alter the standard quasi-Newton updates of approximations to the Hessian or its inverse to diagonal updating matrices. Most often, the updating formulae for this class of methods are derived by the variational approach. A major drawback under this approach is that the der...
Article
A major weakness of the limited memory BFGS (LBFGS) method is that it may converge very slowly on ill-conditioned problems when the identity matrix is used for initialization. Very often, the LBFGS method can adopt a preconditioner on the identity matrix to speed up the convergence. For this purpose, we propose a class of diagonal preconditioners t...
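The LBFGS machinery such a preconditioner plugs into is the standard two-loop recursion; a sketch, where h0_diag is the diagonal initial inverse-Hessian approximation that a preconditioner of this kind would supply in place of the identity:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list, h0_diag):
    """Standard L-BFGS two-loop recursion (pairs stored oldest first)."""
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)                  # stored newest first
        q = q - a * y
    r = h0_diag * q                       # apply diagonal H_0
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r = r + (a - b) * s
    return -r                             # quasi-Newton search direction
```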
Article
Full-text available
Conjugate gradient (CG) methods have played an important role in solving large-scale unconstrained optimization due to their low memory requirements and global convergence properties. Numerous studies and modifications have been devoted recently to improving this method. In this paper, a new modification of the conjugate gradient coefficient (βk) with glo...
Article
One of the widely used methods for solving a nonlinear system of equations is the quasi-Newton method. The basic idea underlining this type of method is to approximate the solution of Newton’s equation by means of approximating the Jacobian matrix via quasi-Newton update. Application of quasi-Newton methods for large scale problems requires, in pri...
Article
This paper concerns the memoryless quasi-Newton method, that is precisely the quasi-Newton method for which the approximation to the inverse of Hessian, at each step, is updated from the identity matrix. Hence its search direction can be computed without the storage of matrices. In this paper, a scaled memoryless symmetric rank one (SR1) method for...
Article
In this paper, we investigate a symmetric rank-one (SR1) quasi-Newton (QN) formula in which the Hessian of the objective function has some special structure. Instead of approximating the whole Hessian via the SR1 formula, we consider an approach which only approximates part of the Hessian matrix that is not easily acquired. Although the SR1 update...
Article
In this paper, we present a new symmetric rank-one (SR1) method for the solution of unconstrained optimization problems. The proposed method involves an algorithm in which the usual SR1 Hessian is updated a number of times in a way to be specified in some iterations, to improve the performance of the Hessian approximation. In particular, we discuss...
Article
The basic requirement of Newton's method in solving systems of nonlinear equations is that the Jacobian must be non-singular. This condition restricts to some extent the application of Newton's method. In this paper we present a modification of Newton's method for systems of nonlinear equations where the Jacobian is singular. This is made possible by...
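One standard remedy for a (near-)singular Jacobian, not necessarily the modification proposed in this paper, is a Levenberg-type shift of the Newton system; a sketch with an assumed regularization parameter mu:

```python
import numpy as np

def regularized_newton(F, J, x0, mu=1e-6, tol=1e-8, max_iter=100):
    """Newton iteration for F(x) = 0 using the shifted normal equations
    (J^T J + mu*I) d = -J^T F, so a step exists even when J is singular."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -Jx.T @ Fx)
        x = x + d
    return x
```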
Article
In this paper, we propose an improved multi-step diagonal updating method for large scale unconstrained optimization. Our approach is based on constructing a new gradient-type method by means of interpolating curves. We measure the distances required to parameterize the interpolating polynomials via a norm defined by a positive-definite matrix. By...
Article
Full-text available
In this study we present two approaches to the time-cost trade-off to complete the project within T (the shortest possible duration to complete the project at least cost within the maximum available budget). The aim of this study is to discuss a comparison between the approach of Crashing Critical Activities (CCA) and the approach of Stretching Noncrit...
Article
We propose some improvements on a diagonal Newton's method for solving large-scale systems of nonlinear equations. In this approach, we use data from two preceding steps to improve the current approximate Jacobian in diagonal form. Via this approach, we can achieve a higher order of accuracy for Jacobian approximation when compared to other exi...
Article
The Barzilai–Borwein (BB) gradient method is favourable over the classical steepest descent method both in theory and in real computations. This method takes a ‘fixed’ step size rather than following a set of line search rules to ensure convergence. Along this line, we present a new approach for the two-point approximation to the quasi-Newton equat...
Article
Symmetric rank-one (SR1) is one of the competitive formulas among the quasi-Newton (QN) methods. In this paper, we propose some modified SR1 updates based on the modified secant equations, which use both gradient and function information. Furthermore, to avoid the loss of positive definiteness and zero denominators of the new SR1 updates, we apply...
Article
Many studies attempt to improve the efficiency of the usual quasi-Newton (QN) methods by accelerating the performance of the algorithm without causing more storage demand. They aim to employ more of the available information from the function values and gradient to approximate the curvature of the objective function. In this paper we derive a new QN me...
Article
Full-text available
The problems of radiative transfer give rise to interesting integral equations that must be faced with efficient numerical solvers. Very often the integral equations are discretized into large-scale nonlinear equations and solved by Newton-like methods. Generally, these kinds of methods require the computation and storage of the Jacobian matrix or it...