C. A. Botsaris

University of Central Greece, Lamia, Central Greece, Greece

Publications (8) · 1.65 Total Impact

  • ABSTRACT: We present a nearly-exact method for the large-scale trust region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case where the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian B, and incorporates both the standard and the hard case. The eigenvalues of B are expressed analytically, so a direction of negative curvature can be computed immediately by performing a sequence of inner products and vector summations. Thus, the hard case is handled easily while the Cholesky factorization is completely avoided. An extensive numerical study is presented, covering all the possible cases arising in the TRS with respect to the eigenstructure of B. Our numerical experiments confirm that the method is suitable for very large-scale problems.
    Optimization Letters 01/2011; 5:207-227. · 1.65 Impact Factor
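The analytic eigenstructure described in the abstract can be illustrated concretely. The following is a minimal sketch, not the authors' code, assuming the simplest memoryless case: a seed matrix B0 = theta*I and a single update pair (s, y). B is then a rank-two modification of theta*I, so its spectrum is theta with multiplicity n-2 plus the two eigenvalues of B restricted to span{s, y}, all obtainable from inner products alone. The helper name `memoryless_bfgs_eigs` is hypothetical.

```python
import numpy as np

# Memoryless BFGS matrix with seed B0 = theta*I and one pair (s, y):
#   B = theta*I - theta * s s^T / (s^T s) + y y^T / (y^T s)
# The orthogonal complement of span{s, y} is invariant and B acts on it as
# theta*I, so only a 2x2 restriction must be solved -- no factorization of B.

def memoryless_bfgs_eigs(theta, s, y):
    """Return the two nontrivial eigenvalues of B and their eigenvectors."""
    # Orthonormal basis (q1, q2) of span{s, y} via Gram-Schmidt.
    q1 = s / np.linalg.norm(s)
    v = y - (q1 @ y) * q1
    q2 = v / np.linalg.norm(v)

    def B_dot(q):  # matrix-free product B @ q: inner products only
        return theta * q - theta * (s @ q) / (s @ s) * s + (y @ q) / (y @ s) * y

    # 2x2 restriction of B to span{q1, q2}; its eigenpairs give the
    # nontrivial part of the spectrum of B.
    M = np.array([[q1 @ B_dot(q1), q1 @ B_dot(q2)],
                  [q2 @ B_dot(q1), q2 @ B_dot(q2)]])
    lam, W = np.linalg.eigh(M)
    vecs = [W[0, j] * q1 + W[1, j] * q2 for j in range(2)]
    return lam, vecs

# Deterministic demo data (chosen so that s and y are independent, y^T s != 0).
n, theta = 6, 2.0
s = np.arange(1.0, n + 1.0)
y = np.array([1.0, -1.0, 2.0, 0.5, -2.0, 1.0])
lam, vecs = memoryless_bfgs_eigs(theta, s, y)

# Dense cross-check: full spectrum is {lam[0], lam[1]} plus theta (n-2 times).
B = theta * np.eye(n) - theta * np.outer(s, s) / (s @ s) + np.outer(y, y) / (y @ s)
dense = np.linalg.eigvalsh(B)
```

If `lam[0]` is negative, `vecs[0]` is a direction of negative curvature, obtained here purely from inner products and vector sums, in the spirit of the abstract.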
  • ABSTRACT: We present a new matrix-free method for the computation of negative curvature directions based on the eigenstructure of minimal-memory BFGS matrices. We determine the eigenvalues of these matrices via simple formulas and compute the desired eigenvectors in explicit form. Consequently, a negative curvature direction is computed in a way that avoids the storage and factorization of any matrix. We propose a modification of the L-BFGS method in which no information is kept from old iterations, so that memory requirements are minimal. The proposed algorithm incorporates a curvilinear path and a linesearch procedure that combines two search directions: a memoryless quasi-Newton direction and a direction of negative curvature. Results of numerical experiments for large-scale problems are also presented.
    Applied Mathematics and Computation 01/2010
  • ABSTRACT: We present a matrix-free method for the large-scale trust region subproblem (TRS), assuming that the approximate Hessian is updated using a minimal-memory BFGS method, where the initial matrix is a scaled identity matrix. We propose a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian and incorporates both the standard and the hard case. The eigenvalues and the corresponding eigenvectors are expressed analytically, and hence a direction of negative curvature can be computed immediately. The most important merit of the proposed method is that it completely avoids factorization, so the trust region subproblem can be solved by performing a sequence of inner products and vector summations. Numerical results are also presented.
    Industrial Informatics, 2009. INDIN 2009. 7th IEEE International Conference on; 07/2009
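For orientation, the Moré–Sorensen-style boundary search that these TRS papers build on can be sketched in its generic dense form. This is an illustration only: it assumes an explicit eigendecomposition of B (the papers' contribution is precisely avoiding that), it omits the hard case, and `trs_dense` is a hypothetical name.

```python
import numpy as np

def trs_dense(B, g, delta, tol=1e-10):
    """Solve min g^T p + 0.5 p^T B p subject to ||p|| <= delta (hard case omitted)."""
    # Eigendecomposition B = Q diag(d) Q^T; in the eigenbasis the constraint
    # becomes the scalar secular equation ||p(lam)||^2 = sum_i (gt_i/(d_i+lam))^2.
    d, Q = np.linalg.eigh(B)
    gt = Q.T @ g
    # Interior solution: B positive definite and the Newton step fits inside.
    if d[0] > 0:
        p = Q @ (-gt / d)
        if np.linalg.norm(p) <= delta:
            return p, 0.0
    # Boundary solution: find lam > max(0, -d[0]) with ||p(lam)|| = delta by
    # bisection on the monotonically decreasing ||p(lam)||.
    lo = max(0.0, -d[0]) + 1e-12
    norm_p = lambda lam: np.linalg.norm(gt / (d + lam))
    hi = lo + 1.0
    while norm_p(hi) > delta:      # bracket the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm_p(mid) > delta else (lo, mid)
    lam = 0.5 * (lo + hi)
    return Q @ (-gt / (d + lam)), lam

# Demo: an indefinite diagonal model (chosen for transparency) forces the
# boundary solution with lam > -d[0] = 2.
B = np.diag([-2.0, -1.0, 1.0, 2.0, 3.0])
g = np.ones(5)
p, lam = trs_dense(B, g, 0.5)
```

The returned pair satisfies the TRS optimality conditions: (B + lam*I) p = -g with lam >= 0, ||p|| = delta, and B + lam*I positive semidefinite.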
  • G. E. Manoussakis, C. A. Botsaris, T. N. Grapsa
    Journal of Information and Optimization Sciences. 01/2008; 29(1):1-15.
  • ABSTRACT: We present a new matrix-free method for the computation of the negative curvature direction in large-scale unconstrained problems. We describe a curvilinear method which uses a combination of a quasi-Newton direction and a negative curvature direction. We propose an algorithm for the computation of the search directions which uses information from two specific L-BFGS matrices in such a way that avoids both the calculation and the storage of the approximate Hessian. Explicit forms for the eigenpair that corresponds to the most negative eigenvalue of the approximate Hessian are also presented. Numerical results show that the proposed approach is promising.
    01/2007;
  • G. E. Manoussakis, T. N. Grapsa, C. A. Botsaris
    ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a twice continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The basic idea in this paper is to accelerate the convergence of the conic method by choosing more appropriate points x_1, x_2, ..., x_{n+1} at which to apply the conic model. To do this, we apply to the gradient of f a dimension-reducing method (DR), which uses reduction to simpler one-dimensional nonlinear equations, converges quadratically, and incorporates the advantages of the Newton and nonlinear SOR algorithms. The new method has been implemented and tested on well-known test functions. It converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    07/2004;
  • ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The conic method in this paper is combined with a non-monotone line search using the Barzilai and Borwein step. The method does not guarantee descent in the objective function at each iteration. Also, the choice of step length is related to the eigenvalues of the Hessian at the minimizer and not to the function value. The use of the stopping criterion introduced by Grippo, Lampariello and Lucidi allows the objective function to increase at some iterations and still guarantees global convergence. The new algorithm converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    10/2002;
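The ingredients named in this abstract, the Barzilai–Borwein steplength and the non-monotone acceptance rule of Grippo, Lampariello and Lucidi, can be sketched in a plain gradient-method setting. This is a minimal illustration, not the conic algorithm of the paper; `bb_nonmonotone` and the parameter values are assumptions for the demo.

```python
import numpy as np

def bb_nonmonotone(f, grad, x0, M=10, gamma=1e-4, tol=1e-8, max_iter=500):
    """Gradient method with BB steplength and GLL non-monotone line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    hist = [f(x)]          # recent function values for the non-monotone test
    alpha = 1.0            # trial steplength for the first iteration
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g
        # GLL rule: compare against the MAX of the last M values, so the
        # objective may increase at some iterations.
        fmax = max(hist[-M:])
        a = alpha
        while f(x + a * d) > fmax + gamma * a * (g @ d):
            a *= 0.5       # backtrack until the non-monotone Armijo test holds
        x_new = x + a * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 steplength s^T s / s^T y for the next trial (guarded).
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x

# Demo on a convex quadratic f(x) = 0.5 x^T A x - b^T x, minimizer A^{-1} b.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = bb_nonmonotone(f, grad, np.zeros(5))
```

On a quadratic, the BB steplength equals a Rayleigh quotient of the Hessian, which reflects the abstract's remark that the step is tied to the Hessian's eigenvalues rather than to function values.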
  • ABSTRACT: In a recent article, we introduced a method based on a conic model for unconstrained optimization. The acceleration of the convergence of this method was obtained by choosing more appropriate points at which to apply the conic model. In particular, we applied to the gradient of the objective function a dimension-reducing method for the numerical solution of a system of algebraic equations. In this work, we incorporate into the previous method the non-monotone Armijo line search introduced by Grippo, Lampariello and Lucidi, combined with the Barzilai and Borwein steplength, in order to further accelerate the convergence. The new method does not guarantee descent in the objective function value at each iteration. Nevertheless, the use of this non-monotone line search allows the objective function to increase at some iterations without affecting the global convergence properties. The new method has been implemented and tested on well-known test functions. It converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.

Publication Stats

3 Citations
192 Downloads
475 Views
1.65 Total Impact Points

Institutions

  • 2010–2011
    • University of Central Greece
      Lamia, Central Greece, Greece
    • University of Patras
      • Department of Mathematics
      Patras, Western Greece, Greece