C. A. Botsaris

University of Patras, Rhion, West Greece, Greece

Publications (11) · 1.65 Total Impact

  • John G. Chilas, C. A. Botsaris
    ABSTRACT: In this paper, we tested the Displacement Effect Hypothesis for the case of Greece in the post-World War II period, using mainly the global dummy variables approach. The motive for the research was the resurgent debate among scientists and politicians concerning the large size of the public sector and its causes. In our view, this development was mainly caused by certain exogenous distortions which have not been sufficiently analyzed in the Greek literature. Two major disturbances were historically detected and subjected to statistical testing.
    Journal of Statistics and Management Systems. 06/2013; 6(3):371-389.
  •
    ABSTRACT: We present a nearly-exact method for the large scale trust region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case where the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian B, and incorporates both the standard and the hard case. The eigenvalues of B are expressed analytically, and consequently a direction of negative curvature can be computed immediately by performing a sequence of inner products and vector summations. Thus, the hard case is handled easily while the Cholesky factorization is completely avoided. An extensive numerical study is presented, covering all the possible cases arising in the TRS with respect to the eigenstructure of B. Our numerical experiments confirm that the method is suitable for very large scale problems.
    Optimization Letters 01/2011; 5:207-227. · 1.65 Impact Factor
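The key observation in this line of work, that the spectrum of a minimal-memory BFGS matrix can be obtained from inner products alone, can be sketched as follows (an illustrative reconstruction, not the paper's exact formulas; `small_eigs` and the test vectors are invented for the example):

```python
import numpy as np

def small_eigs(s, y, gamma):
    """Eigenvalues of the memoryless BFGS matrix
        B = gamma*I - gamma*(s s^T)/(s^T s) + (y y^T)/(y^T s)
    restricted to span{s, y}.  B acts as gamma*I on the orthogonal
    complement of span{s, y}, so the remaining n-2 eigenvalues all
    equal gamma.  Only inner products are needed; B is never formed."""
    q1 = s / np.linalg.norm(s)
    w = y - (q1 @ y) * q1                  # Gram-Schmidt on {s, y}
    q2 = w / np.linalg.norm(w)
    def Bq(q):                             # B @ q via inner products only
        return gamma*q - gamma*(s @ q)/(s @ s)*s + (y @ q)/(y @ s)*y
    M = np.array([[q1 @ Bq(q1), q1 @ Bq(q2)],
                  [q2 @ Bq(q1), q2 @ Bq(q2)]])
    return np.linalg.eigvalsh(0.5*(M + M.T))

# sanity check against the dense matrix on a small example
n, gamma = 6, 2.0
s = np.arange(1.0, n + 1.0)
y = np.array([1.0, 1.0, -1.0, 1.0, -1.0, 1.0])
B = gamma*np.eye(n) - gamma*np.outer(s, s)/(s @ s) + np.outer(y, y)/(y @ s)
lam_pair = small_eigs(s, y, gamma)
lam_full = np.linalg.eigvalsh(B)
```

If the smaller of the two computed eigenvalues is negative, the corresponding eigenvector (a combination of q1 and q2) yields a negative-curvature direction, so the hard case is detected without forming or factorizing any matrix.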
  •
    ABSTRACT: We present a new matrix-free method for the computation of negative curvature directions based on the eigenstructure of minimal-memory BFGS matrices. We determine via simple formulas the eigenvalues of these matrices and we compute the desirable eigenvectors in explicit form. Consequently, a negative curvature direction is computed in a way that avoids the storage and the factorization of any matrix. We propose a modification of the L-BFGS method in which no information is kept from old iterations, so that memory requirements are minimal. The proposed algorithm incorporates a curvilinear path and a linesearch procedure which combines two search directions: a memoryless quasi-Newton direction and a direction of negative curvature. Results of numerical experiments for large scale problems are also presented.
    Applied Mathematics and Computation. 01/2010;
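The curvilinear-path idea, combining a quasi-Newton direction d_n with a negative-curvature direction d_c along x(a) = x + a²·d_n + a·d_c, can be illustrated by escaping a saddle point (a generic sketch, not the paper's exact algorithm; the test function and names are invented):

```python
import numpy as np

def f(x):                        # nonconvex test function with a saddle at 0
    return x[0]**2 - x[1]**2

def grad(x):
    return np.array([2.0*x[0], -2.0*x[1]])

def curvilinear_step(x, d_n, d_c, curv, sigma=1e-4, beta=0.5, max_back=60):
    """Backtrack a = 1, beta, beta^2, ... along x(a) = x + a^2*d_n + a*d_c
    until the sufficient-decrease test
        f(x(a)) <= f(x) + sigma*a^2*(g^T d_n + 0.5*curv)
    holds, where curv = d_c^T H d_c < 0 is the curvature along d_c."""
    fx, g = f(x), grad(x)
    a = 1.0
    for _ in range(max_back):
        x_new = x + a*a*d_n + a*d_c
        if f(x_new) <= fx + sigma * a*a * (g @ d_n + 0.5*curv):
            return x_new
        a *= beta
    return x

x0 = np.zeros(2)                 # the saddle: the gradient vanishes here
d_n = np.zeros(2)                # quasi-Newton direction is zero at a saddle
d_c = np.array([0.0, 1.0])       # eigenvector of the negative Hessian eigenvalue
x1 = curvilinear_step(x0, d_n, d_c, curv=-2.0)
```

A pure line search along d_n would stall at the saddle; the a-linear negative-curvature term lets the iterate move off it immediately.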
  •
    ABSTRACT: We present a matrix-free method for the large scale trust region subproblem (TRS), assuming that the approximate Hessian is updated using a minimal-memory BFGS method, where the initial matrix is a scaled identity matrix. We propose a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian, and incorporates both the standard and the hard case. The eigenvalues and the corresponding eigenvectors are expressed analytically, and hence a direction of negative curvature can be computed immediately. The most important merit of the proposed method is that it completely avoids the factorization, and the trust region subproblem can be solved by performing a sequence of inner products and vector summations. Numerical results are also presented.
    7th IEEE International Conference on Industrial Informatics (INDIN 2009); 07/2009
  • G. E. Manoussakis, C. A. Botsaris, T. N. Grapsa
    ABSTRACT: We present a new algorithm for finding the unconstrained minimum of a continuously differentiable function f in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The conic method in this paper is combined with a non-monotone line search. The method does not guarantee descent in the objective function at each iteration. The use of the stopping criterion introduced by Grippo, Lampariello and Lucidi allows the objective function to increase at some iterations and still guarantees global convergence. The new algorithm has been implemented and tested on several well-known test functions.
    Journal of Information and Optimization Sciences. 01/2008; 29(1):1-15.
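The Grippo, Lampariello and Lucidi acceptance rule referred to in this abstract can be sketched as follows (an illustrative reconstruction paired with plain steepest-descent directions, not the paper's conic-model algorithm; all function and variable names are invented for the example):

```python
import numpy as np
from collections import deque

def gll_linesearch(f, x, g, d, hist, gamma=1e-4, beta=0.5, max_back=60):
    """Non-monotone Armijo rule of Grippo, Lampariello and Lucidi:
    accept step a when f(x + a*d) <= max(last M f-values) + gamma*a*g^T d,
    so the objective is allowed to increase at some iterations."""
    fmax = max(hist)
    gd = g @ d
    a = 1.0
    for _ in range(max_back):
        if f(x + a*d) <= fmax + gamma * a * gd:
            break
        a *= beta
    return a

def nonmonotone_descent(f, grad, x0, M=10, iters=2000):
    x = x0.copy()
    hist = deque([f(x)], maxlen=M)   # memory of the last M function values
    for _ in range(iters):
        g = grad(x)
        a = gll_linesearch(f, x, g, -g, hist)
        x = x - a * g
        hist.append(f(x))
    return x

# Rosenbrock, a standard "well-known test function"
ros = lambda x: 100.0*(x[1] - x[0]**2)**2 + (1.0 - x[0])**2
ros_g = lambda x: np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                            200.0*(x[1] - x[0]**2)])
x_end = nonmonotone_descent(ros, ros_g, np.array([-1.2, 1.0]))
```

Comparing against the maximum of the last M function values, rather than the current value, prevents the line search from rejecting steps that climb briefly while crossing the curved valley of a function like Rosenbrock's.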
  •
    ABSTRACT: We present a new matrix-free method for the computation of the negative curvature direction in large scale unconstrained problems. We describe a curvilinear method which uses a combination of a quasi-Newton direction and a negative curvature direction. We propose an algorithm for the computation of the search directions which uses information of two specific L-BFGS matrices in a way that avoids both the calculation and the storage of the approximate Hessian. Explicit forms for the eigenpair that corresponds to the most negative eigenvalue of the approximate Hessian are also presented. Numerical results show that the proposed approach is promising.
    01/2007;
  • G. E. Manoussakis, T. N. Grapsa, C. A. Botsaris
    ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a twice-continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The basic idea in this paper is to accelerate the convergence of the conic method by choosing more appropriate points x_1, x_2, ..., x_{n+1} at which to apply the conic model. To do this, we apply to the gradient of f a dimension-reducing method (DR), which uses reduction to proper simpler one-dimensional nonlinear equations, converges quadratically and incorporates the advantages of Newton and Nonlinear SOR algorithms. The new method has been implemented and tested on well-known test functions. It converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    07/2004;
  •
    ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The conic method in this paper is combined with a non-monotone line search using the Barzilai and Borwein step. The method does not guarantee descent in the objective function at each iteration. Also, the choice of step length is related to the eigenvalues of the Hessian at the minimizer and not to the function value. The use of the stopping criterion introduced by Grippo, Lampariello and Lucidi allows the objective function to increase at some iterations and still guarantees global convergence. The new algorithm converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    10/2002;
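The Barzilai and Borwein steplength mentioned above is easy to illustrate on a convex quadratic, where its connection to the eigenvalues of the Hessian is explicit (a generic sketch, not the paper's conic algorithm; all names are invented for the example):

```python
import numpy as np

def bb_gradient(A, b, x0, iters=80):
    """Gradient method with the Barzilai-Borwein (BB1) steplength for
    f(x) = 0.5*x^T A x - b^T x.  The step a = s^T s / s^T y is the
    inverse of a Rayleigh quotient of the Hessian A, so it is tied to
    the eigenvalues of the Hessian rather than to the function value."""
    x = x0.copy()
    g = A @ x - b
    a = 1.0 / np.linalg.norm(g)            # safeguarded first step
    for _ in range(iters):
        x_new = x - a * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) < 1e-12:  # converged
            return x_new
        s, y = x_new - x, g_new - g
        a = (s @ s) / (s @ y)              # BB1 steplength
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag(np.linspace(1.0, 50.0, 6)) @ Q.T   # SPD Hessian, cond ~ 50
b = rng.standard_normal(6)
x_bb = bb_gradient(A, b, np.zeros(6))
```

The iteration is non-monotone in f, which is exactly why it is paired with the Grippo, Lampariello and Lucidi criterion in the work above rather than with a standard monotone Armijo rule.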
  • C. A. Botsaris
    ABSTRACT: An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality constrained region defined by the constraints which are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.
    Journal of Mathematical Analysis and Applications. 01/1979; 71(2):482-515.
  • C. A. Botsaris
    ABSTRACT: An algorithm is presented that minimizes a nonlinear function in many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. For small stepsizes our method behaves like the generalized reduced gradient algorithm, whereas for large enough stepsizes the constrained equivalent of Newton's method for unconstrained minimization is obtained.
    Journal of Mathematical Analysis and Applications. 01/1979; 69(2):372-397.
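The steepest-descent-curve idea underlying this paper can be sketched for a single linear equality constraint, integrating the flow dx/dt = -P∇f(x) with P the projector onto the constraint null space (a minimal forward-Euler illustration, not the paper's integration scheme; the example problem is invented):

```python
import numpy as np

# Example: minimize 0.5*||x||^2 subject to sum(x) = 1 (solution x_i = 1/n).
n = 5
A = np.ones((1, n))                                  # constraint: sum(x) = 1
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)    # null-space projector

def grad_f(x):
    return x                                         # gradient of 0.5*||x||^2

# Start from a feasible point; since A @ P = 0, each Euler step of the
# projected flow dx/dt = -P grad_f(x) preserves feasibility exactly.
x = np.zeros(n)
x[0] = 1.0                                           # satisfies A x = 1
h = 0.1
for _ in range(500):
    x = x - h * P @ grad_f(x)
```

The continuous curve x(t) stays on the feasible set for all t and descends the objective along it, which is the property the discrete curvilinear search paths in the paper are built to approximate.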
  •
    ABSTRACT: In a recent article, we introduced a method based on a conic model for unconstrained optimization. The acceleration of the convergence of this method was obtained by choosing more appropriate points at which to apply the conic model. In particular, we applied to the gradient of the objective function a dimension-reducing method for the numerical solution of a system of algebraic equations. In this work, we incorporate into the previous method the non-monotone Armijo line search introduced by Grippo, Lampariello and Lucidi, combined with the Barzilai and Borwein steplength, in order to further accelerate the convergence. The new method does not guarantee descent in the objective function value at each iteration. Nevertheless, the use of this non-monotone line search allows the objective function to increase at some iterations without affecting the global convergence properties. The new method has been implemented and tested on well-known test functions. It converges in n+1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.

Publication Stats

14 Citations
1.65 Total Impact Points

Institutions

  • 2010–2013
    • University of Patras
      • Laboratory of Operations Research
      • Department of Mathematics
      Rhion, West Greece, Greece
  • 2010–2011
    • University of Central Greece
      Lamia, Central Greece, Greece
  • 1979
    • Aristotle University of Thessaloniki
      • Division of Mathematics (MATH)
      Saloníki, Central Macedonia, Greece