C BOTSARIS

University of Patras, Rhion, West Greece, Greece

Publications (17) · 9.3 Total impact

  • John G. Chilas, C. A. Botsaris
    ABSTRACT: In this paper, we tested the Displacement Effect Hypothesis for the case of Greece in the post-World War II period, using mainly the global dummy variables approach. The motive for the research was the resurgent debate among scientists and politicians concerning the large size of the public sector and its causes. In our view, this development was mainly caused by certain exogenous distortions, which have not been sufficiently analyzed in the Greek literature. Two major disturbances were historically detected and subjected to statistical testing.
    Journal of Statistics and Management Systems. 06/2013; 6(3):371-389.
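
A minimal sketch of the dummy-variable testing idea described above, on synthetic data; the disturbance year, series, and specification below are hypothetical placeholders, not the paper's:

```python
# Hypothetical illustration (not the paper's data or model): testing for a
# permanent upward shift in the public-expenditure share after an assumed
# disturbance year, via a level dummy in an OLS regression.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 1991)
shift = (years >= 1974).astype(float)            # hypothetical disturbance year
share = 0.25 + 0.05 * shift + rng.normal(0, 0.01, years.size)  # synthetic series

X = np.column_stack([np.ones_like(shift), shift])
beta, *_ = np.linalg.lstsq(X, share, rcond=None)

resid = share - X @ beta
s2 = resid @ resid / (len(share) - X.shape[1])   # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                # OLS covariance matrix
t_dummy = beta[1] / np.sqrt(cov[1, 1])
print(f"shift estimate = {beta[1]:.4f}, t-statistic = {t_dummy:.2f}")
```
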
  • ABSTRACT: We present a nearly-exact method for the large scale trust region subproblem (TRS) based on the properties of the minimal-memory BFGS method. Our study concentrates on the case where the initial BFGS matrix can be any scaled identity matrix. The proposed method is a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian B, and incorporates both the standard and the hard case. The eigenvalues of B are expressed analytically, and consequently a direction of negative curvature can be computed immediately by performing a sequence of inner products and vector summations. Thus, the hard case is handled easily while the Cholesky factorization is completely avoided. An extensive numerical study is presented, covering all possible cases arising in the TRS with respect to the eigenstructure of B. Our numerical experiments confirm that the method is suitable for very large scale problems.
    Optimization Letters 01/2011; 5:207-227. · 1.65 Impact Factor
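
The analytic eigenvalue computation the abstract refers to can be sketched as follows, assuming B is a scaled identity gamma*I updated with a single BFGS pair (s, y); this is my reconstruction of the general idea, not the paper's code. Since the update differs from gamma*I only on span{s, y}, the spectrum is gamma with multiplicity n-2 plus the two eigenvalues of a 2x2 restriction:

```python
# Eigenvalues of a minimal-memory BFGS matrix B+ built from B = gamma*I and one
# pair (s, y). B+ = gamma*I - (B s)(B s)^T/(s^T B s) + y y^T/(y^T s) acts as
# gamma*I outside span{s, y}, so no n x n factorization is needed: gamma has
# multiplicity n-2 and the remaining two eigenvalues come from a 2x2 problem.
import numpy as np

def mm_bfgs_eigenvalues(gamma, s, y):
    Q, _ = np.linalg.qr(np.column_stack([s, y]))  # orthonormal basis of span{s, y}
    Bs = gamma * s                                 # B s with B = gamma*I
    U = -np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    small = gamma * np.eye(2) + Q.T @ U @ Q        # 2x2 restriction of B+
    lam1, lam2 = np.linalg.eigvalsh(small)
    return lam1, lam2, gamma                       # gamma has multiplicity n-2

gamma, s, y = 2.0, np.array([1.0, 0.0, 1.0]), np.array([0.5, 1.0, -0.2])
print(mm_bfgs_eigenvalues(gamma, s, y))
```
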
  • ABSTRACT: We present a new matrix-free method for the computation of negative curvature directions based on the eigenstructure of minimal-memory BFGS matrices. We determine via simple formulas the eigenvalues of these matrices and we compute the desirable eigenvectors by explicit forms. Consequently, a negative curvature direction is computed in such a way that avoids the storage and the factorization of any matrix. We propose a modification of the L-BFGS method in which no information is kept from old iterations, so that memory requirements are minimal. The proposed algorithm incorporates a curvilinear path and a line search procedure, which combines two search directions: a memoryless quasi-Newton direction and a direction of negative curvature. Results of numerical experiments for large scale problems are also presented.
    Applied Mathematics and Computation 01/2010; · 1.35 Impact Factor
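
One standard way to combine a quasi-Newton direction d with a negative-curvature direction p in a curvilinear path is x(a) = x + a^2*d + a*p with a backtracking acceptance test. The sketch below uses this common form, which may differ in detail from the paper's path:

```python
# Hedged sketch of a curvilinear search x(a) = x + a^2*d + a*p, with d a
# (quasi-Newton) descent direction and p a negative-curvature direction;
# the sufficient-decrease test here is a deliberately simple placeholder.
import numpy as np

def curvilinear_search(f, x, d, p, beta=0.5, c=1e-4):
    fx = f(x)
    a = 1.0
    while a > 1e-12:
        trial = x + a**2 * d + a * p
        if f(trial) <= fx - c * a**2:   # simple sufficient-decrease test
            return trial, a
        a *= beta                        # backtrack along the curve
    return x, 0.0

# toy usage on f(x) = x0^2 - x1^2 near the saddle at the origin
f = lambda x: x[0]**2 - x[1]**2
x = np.array([1e-3, 1e-3])
d = np.array([-2e-3, 0.0])               # descent-like direction
p = np.array([0.0, 1.0])                 # negative-curvature direction
print(curvilinear_search(f, x, d, p))
```
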
  • ABSTRACT: We present a matrix-free method for the large scale trust region subproblem (TRS), assuming that the approximate Hessian is updated using a minimal-memory BFGS method, where the initial matrix is a scaled identity matrix. We propose a variant of the Moré–Sorensen method that exploits the eigenstructure of the approximate Hessian, and incorporates both the standard and the hard case. The eigenvalues and the corresponding eigenvectors are expressed analytically, and hence a direction of negative curvature can be computed immediately. The most important merit of the proposed method is that it completely avoids the factorization, and the trust region subproblem can be solved by performing a sequence of inner products and vector summations. Numerical results are also presented.
    7th IEEE International Conference on Industrial Informatics (INDIN 2009); 07/2009
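
For orientation, here is the Moré–Sorensen iteration in its generic dense form: Newton steps on the secular equation ||p(lam)|| = Delta, where (B + lam*I) p = -g. The paper's contribution is carrying this out matrix-free via the eigenstructure of B; the Cholesky-based loop below only illustrates the baseline iteration being adapted, and does not treat the hard case:

```python
# Dense More-Sorensen-style iteration for min g^T p + 0.5 p^T B p, ||p|| <= Delta.
# Newton's method is applied to the secular equation via the standard update
# lam += (||p||/Delta - 1) * ||p||^2 / ||q||^2 with q = L^{-1} p.
import numpy as np

def trs_more_sorensen(B, g, Delta, tol=1e-10, max_iter=50):
    lam = max(0.0, -np.linalg.eigvalsh(B)[0] + 1e-8)  # start where B + lam*I > 0
    n = B.shape[0]
    for _ in range(max_iter):
        L = np.linalg.cholesky(B + lam * np.eye(n))
        p = np.linalg.solve(L.T, np.linalg.solve(L, -g))
        pnorm = np.linalg.norm(p)
        if abs(pnorm - Delta) < tol * Delta:
            break
        q = np.linalg.solve(L, p)                      # for the Newton correction
        lam = max(lam + (pnorm / Delta - 1.0) * pnorm**2 / (q @ q), 0.0)
    return p, lam

B = np.array([[2.0, 0.0], [0.0, -1.0]])                # indefinite example
g = np.array([1.0, 1.0])
print(trs_more_sorensen(B, g, Delta=0.5))
```
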
  • G. E. Manoussakis, C. A. Botsaris, T. N. Grapsa
    ABSTRACT: We present a new algorithm for finding the unconstrained minimum of a continuously differentiable function f in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The conic method in this paper is combined with a non-monotone line search. The method does not guarantee descent in the objective function at each iteration. The use of the line search criterion introduced by Grippo, Lampariello and Lucidi allows the objective function to increase at some iterations while still guaranteeing global convergence. The new algorithm has been implemented and tested on several well-known test functions.
    Journal of Information and Optimization Sciences. 01/2008; 29(1):1-15.
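
The Grippo-Lampariello-Lucidi criterion referenced in the abstract measures sufficient decrease against the maximum of the last M function values rather than the current one, which is what lets the objective rise at some iterations. A compact sketch, with illustrative parameter names:

```python
# Non-monotone Armijo (GLL) backtracking: accept a step when
# f(x + a*d) <= max(last M values) + c*a*(g^T d).
import numpy as np

def nonmonotone_armijo(f, x, d, g, history, M=10, c=1e-4, beta=0.5):
    fmax = max(history[-M:])             # reference: worst of the last M iterates
    a = 1.0
    while f(x + a * d) > fmax + c * a * (g @ d):
        a *= beta                        # backtrack until the GLL test holds
    return a

# usage on f(x) = ||x||^2 with d = -gradient
f = lambda x: x @ x
x = np.array([1.0, -2.0])
g = 2 * x
history = [f(x)]
a = nonmonotone_armijo(f, x, -g, g, history)
print(a, f(x + a * (-g)))
```
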
  • ABSTRACT: We present a new matrix-free method for the computation of the negative curvature direction in large scale unconstrained problems. We describe a curvilinear method which uses a combination of a quasi-Newton direction and a negative curvature direction. We propose an algorithm for the computation of the search directions which uses information from two specific L-BFGS matrices in such a way that avoids both the calculation and the storage of the approximate Hessian. Explicit forms for the eigenpair that corresponds to the most negative eigenvalue of the approximate Hessian are also presented. Numerical results show that the proposed approach is promising.
    01/2007;
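
Continuing the minimal-memory BFGS sketch given earlier, the explicit eigenpair for the most negative eigenvalue can be recovered in the same 2x2 subspace, yielding a negative-curvature direction with no storage or factorization of the approximate Hessian. This is an assumed construction mirroring the abstract, not the authors' code:

```python
# Most negative eigenpair of B+ = gamma*I - gamma*(s s^T)/(s^T s) + y y^T/(y^T s):
# outside span{s, y} every eigenvalue equals gamma, so the candidate negative
# eigenpair lives in the 2x2 restriction and its eigenvector is Q @ w.
import numpy as np

def most_negative_eigenpair(gamma, s, y):
    Q, _ = np.linalg.qr(np.column_stack([s, y]))      # basis of span{s, y}
    U = -gamma * np.outer(s, s) / (s @ s) + np.outer(y, y) / (y @ s)
    lam, W = np.linalg.eigh(gamma * np.eye(2) + Q.T @ U @ Q)
    if lam[0] < gamma:
        return lam[0], Q @ W[:, 0]                    # eigenpair from the subspace
    return gamma, None                                # else gamma (mult. n-2) is smallest

s, y = np.array([1.0, 0.0, 1.0]), np.array([-2.0, 1.0, 0.5])
lam_min, v = most_negative_eigenpair(1.0, s, y)
print(lam_min, v)        # negative-curvature direction when lam_min < 0
```
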
  • G. E. Manoussakis, T. N. Grapsa, C. A. Botsaris
    ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a twice continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The basic idea in this paper is to accelerate the convergence of the conic method by choosing more appropriate points x_1, x_2, ..., x_{n+1} at which to apply the conic model. To do this, we apply to the gradient of f a dimension-reducing method (DR), which uses reduction to proper, simpler one-dimensional nonlinear equations, converges quadratically, and incorporates the advantages of the Newton and nonlinear SOR algorithms. The new method has been implemented and tested on well-known test functions. It converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    07/2004;
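
For reference, the conic model these papers build on can be written, in one common parametrization (conventions vary in the literature, and this may not match the authors' exact form), as c(s) = f + (g^T s)/(1 - h^T s) + (1/2)(s^T A s)/(1 - h^T s)^2, where h is the horizon (gauge) vector and h = 0 recovers the usual quadratic model:

```python
# Illustrative evaluation of a conic model in the parametrization above;
# not code from the paper, and sign conventions for h differ across sources.
import numpy as np

def conic_model(s, f0, g, A, h):
    t = 1.0 - h @ s                      # gauge factor; model valid for t > 0
    return f0 + (g @ s) / t + 0.5 * (s @ A @ s) / t**2

f0 = 1.0
g = np.array([1.0, -1.0])
A = np.array([[2.0, 0.0], [0.0, 1.0]])
h = np.array([0.1, 0.05])                # horizon vector (hypothetical values)
s = np.array([-0.3, 0.2])
print(conic_model(s, f0, g, A, h))               # conic prediction at step s
print(conic_model(s, f0, g, A, np.zeros(2)))     # quadratic model, for comparison
```
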
  • ABSTRACT: In this paper we present a new algorithm for finding the unconstrained minimum of a continuously differentiable function f(x) in n variables. This algorithm is based on a conic model function, which does not involve the conjugacy matrix or the Hessian of the model function. The conic method in this paper is combined with a non-monotone line search using the Barzilai and Borwein step. The method does not guarantee descent in the objective function at each iteration. Also, the choice of step length is related to the eigenvalues of the Hessian at the minimizer and not to the function value. The use of the line search criterion introduced by Grippo, Lampariello and Lucidi allows the objective function to increase at some iterations while still guaranteeing global convergence. The new algorithm converges in n + 1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.
    10/2002;
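
The Barzilai and Borwein step mentioned in the abstract comes in two standard forms, both tied to curvature (Rayleigh-quotient) information rather than function values; here in its textbook form:

```python
# The two Barzilai-Borwein step lengths, computed from the last displacement
# s = x_k - x_{k-1} and gradient difference y = g_k - g_{k-1}.
import numpy as np

def bb_steps(s, y):
    bb1 = (s @ s) / (s @ y)   # "long" BB step
    bb2 = (s @ y) / (y @ y)   # "short" BB step
    return bb1, bb2

# for a quadratic with Hessian H, both steps lie in [1/lambda_max, 1/lambda_min]
H = np.diag([1.0, 10.0])
x_prev, x = np.array([1.0, 1.0]), np.array([0.5, -0.2])
s = x - x_prev
y = H @ x - H @ x_prev        # gradient difference for f = 0.5 x^T H x
print(bb_steps(s, y))
```
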
  • C. Botsaris
    ABSTRACT: An arc method is presented for solving the equality constrained nonlinear programming problem. The curvilinear search path used at each iteration of the algorithm is a second-order approximation to the geodesic of the constraint surface which emanates from the current feasible point and has the same initial heading as the projected negative gradient at that point. When the constraints are linear, or when the step length is sufficiently small, the algorithm reduces to Rosen's Gradient Projection Method.
    Journal of Mathematical Analysis and Applications 01/1981; 79(2):295-306. · 1.05 Impact Factor
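
When the constraints are linear, the projected negative gradient that the arc starts along has a familiar closed form; a sketch for equality constraints A x = b with A of full row rank, illustrating the reduction to Rosen's method rather than reproducing the paper's code:

```python
# Projected negative gradient -P g with P = I - A^T (A A^T)^{-1} A,
# the orthogonal projector onto the null space of A.
import numpy as np

def projected_neg_gradient(A, g):
    AAt = A @ A.T
    return -(g - A.T @ np.linalg.solve(AAt, A @ g))

A = np.array([[1.0, 1.0, 1.0]])          # one constraint: x1 + x2 + x3 = const
g = np.array([3.0, 1.0, -1.0])
d = projected_neg_gradient(A, g)
print(d, A @ d)                          # A d = 0: the direction stays feasible
```
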
  • C. A. Botsaris
    ABSTRACT: In this paper a class of algorithms is presented for minimizing a nonlinear function subject to nonlinear equality constraints along curvilinear search paths obtained by solving a linear approximation to an initial-value system of differential equations. The system of differential equations is derived by introducing a continuously differentiable matrix whose columns span the subspace tangent to the feasible region. The new approach provides a convenient way for working with the constraint set itself, rather than with the subspace tangent to it. The algorithms obtained in this paper may be viewed as curvilinear extensions of two known and successful minimization techniques. Under certain conditions, the algorithms converge to a point satisfying the first-order Kuhn-Tucker optimality conditions at a rate that is asymptotically at least quadratic.
    Journal of Mathematical Analysis and Applications 01/1981; 79(1):96-112. · 1.05 Impact Factor
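
The "continuously differentiable matrix whose columns span the subspace tangent to the feasible region" can be realized numerically, at a single point, by a null-space basis of the constraint Jacobian. One standard construction (not necessarily the paper's) uses a full QR factorization:

```python
# Null-space basis Z of the constraint Jacobian J: the columns of Z span the
# tangent subspace of {x : c(x) = const} at the current point.
import numpy as np

def tangent_basis(J):
    m, n = J.shape
    Q, _ = np.linalg.qr(J.T, mode='complete')  # Q is an n x n orthogonal matrix
    return Q[:, m:]                            # last n - m columns span null(J)

J = np.array([[1.0, 2.0, -1.0]])               # Jacobian of one constraint in R^3
Z = tangent_basis(J)
print(J @ Z)                                   # ~0: the columns are tangent
```
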
  • C. Botsaris
    ABSTRACT: An algorithm is presented that minimizes a continuously differentiable function in several variables subject to linear inequality constraints. At each step of the algorithm an arc is generated along which a move is performed until either a point yielding a sufficient descent in the function value is determined or a constraint boundary is encountered. The decision to delete a constraint from the list of active constraints is based upon periodic estimates of the Kuhn-Tucker multipliers. The curvilinear search paths are obtained by solving a linear approximation to the differential equation of the continuous steepest descent curve for the objective function on the equality constrained region defined by the constraints which are required to remain binding. If the Hessian matrix of the objective function has certain properties and if the constraint gradients are linearly independent, the sequence generated by the algorithm converges to a point satisfying the Kuhn-Tucker optimality conditions at a rate that is at least quadratic.
    Journal of Mathematical Analysis and Applications 01/1979; 71(2):482-515.
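
The multiplier-based deletion test mentioned in the abstract, in its usual first-order form (an illustration, not the paper's exact estimator): with the active-constraint gradients stacked as rows of A, least-squares multipliers solve A^T lam ≈ g, and an active inequality constraint with a negative multiplier becomes a candidate for deletion:

```python
# First-order Kuhn-Tucker multiplier estimates for an active set: solve
# A^T lam ~ g in the least-squares sense; lam_i < 0 flags constraint i
# as a candidate to leave the active set.
import numpy as np

def kt_multipliers(A, g):
    lam, *_ = np.linalg.lstsq(A.T, g, rcond=None)
    return lam

A = np.array([[1.0, 0.0],     # gradients of two active constraints (rows)
              [0.0, 1.0]])
g = np.array([2.0, -3.0])     # objective gradient at the current point
lam = kt_multipliers(A, g)
print(lam)                    # lam[1] < 0 suggests deleting constraint 2
```
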
  • C. Botsaris
    ABSTRACT: An algorithm is presented that minimizes a nonlinear function in many variables under equality constraints by generating a monotonically improving sequence of feasible points along curvilinear search paths obeying an initial-value system of differential equations. The derivation of the differential equations is based on the idea of a steepest descent curve for the objective function on the feasible region. For small stepsize our method behaves as the generalized reduced gradient algorithm, whereas for large enough stepsize the constrained equivalent of Newton's method for unconstrained minimization is obtained.
    Journal of Mathematical Analysis and Applications 01/1979; 69(2):372-397. · 1.05 Impact Factor
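
The small-stepsize limit the abstract describes can be visualized with Euler steps on the projected steepest-descent ODE dx/dt = -P(x) grad f(x), where P(x) projects onto the tangent space of the constraint surface. This is a toy sketch with a crude first-order feasibility restoration, not the paper's algorithm:

```python
# Euler integration of the constrained steepest-descent flow: project the
# gradient onto the tangent space, take a small step, then restore c(x) = 0
# with one Gauss-Newton correction.
import numpy as np

def step(x, grad_f, c, Jc, h=0.01):
    g, J = grad_f(x), Jc(x)
    JJt = J @ J.T
    Pg = g - J.T @ np.linalg.solve(JJt, J @ g)   # tangent-space projection of g
    x = x - h * Pg                               # Euler step along the descent curve
    return x - J.T @ np.linalg.solve(JJt, c(x))  # first-order restoration to c(x)=0

# minimize f(x) = x0^2 + 2*x1^2 on the circle x0^2 + x1^2 = 1
grad_f = lambda x: np.array([2 * x[0], 4 * x[1]])
c = lambda x: np.array([x @ x - 1.0])
Jc = lambda x: np.array([2 * x])
x = np.array([0.6, 0.8])
for _ in range(500):
    x = step(x, grad_f, c, Jc)
print(x)                                         # converges toward (1, 0)
```
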
  • C. Botsaris
    ABSTRACT: An algorithm was recently presented that minimizes a nonlinear function in several variables using a Newton-type curvilinear search path. In order to determine this curvilinear search path the eigenvalue problem of the Hessian matrix of the objective function has to be solved at each iteration of the algorithm. In this paper an iterative procedure requiring gradient information only is developed for the approximation of the eigensystem of the Hessian matrix. It is shown that for a quadratic function the approximated eigenvalues and eigenvectors tend rapidly to the actual eigenvalues and eigenvectors of its Hessian matrix. The numerical tests indicate that the resulting algorithm is very fast and stable. Moreover, the fact that some approximations to the eigenvectors of the Hessian matrix are available is used to get past saddle points and accelerate the rate of convergence on flat functions.
    Journal of Mathematical Analysis and Applications 01/1978; 63(2):396-411. · 1.05 Impact Factor
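
The "gradient information only" idea can be mimicked with finite differences: a Hessian-vector product Hv ≈ (grad f(x + eps*v) - grad f(x))/eps needs no explicit Hessian, and power iteration on it approximates the dominant eigenpair. This is a modern stand-in for the paper's iterative procedure, not a reconstruction of it:

```python
# Finite-difference Hessian-vector products driving power iteration:
# only gradient evaluations are required to estimate an extreme eigenpair.
import numpy as np

def hessvec(grad_f, x, v, eps=1e-6):
    return (grad_f(x + eps * v) - grad_f(x)) / eps

def dominant_eigenpair(grad_f, x, n, iters=100, seed=0):
    v = np.random.default_rng(seed).normal(size=n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = hessvec(grad_f, x, v)
        v = w / np.linalg.norm(w)          # power iteration on H
    return v @ hessvec(grad_f, x, v), v    # Rayleigh quotient and eigenvector

# quadratic test: H = diag(1, 5), so the dominant eigenpair is (5, +/-e2)
grad_f = lambda x: np.array([1.0, 5.0]) * x
print(dominant_eigenpair(grad_f, np.zeros(2), 2))
```
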
  • C. Botsaris
    ABSTRACT: Recently a number of algorithms have been derived which minimize a nonlinear function by iteratively constructing a monotonically improving sequence of approximate minimizers along curvilinear search paths instead of rays. These curvilinear search paths were obtained by solving a first-order approximation to certain initial-value systems of nonlinear differential equations. The simplest technique for solving some of the above systems numerically, viz., that of Euler, yields either the steepest-descent or the Newton method, and this induced us to examine the possibility of modifying other, more sophisticated and stable numerical integration techniques for use in function minimization. This paper presents some theoretical as well as practical aspects of using numerical integration techniques to derive minimization algorithms. Results are also given and possible areas for future research are indicated.
    Journal of Mathematical Analysis and Applications 01/1978; 63(3):729-749. · 1.05 Impact Factor
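
One instance of the abstract's theme, adapting a numerical integration scheme to minimization: classical fourth-order Runge-Kutta applied to the gradient flow dx/dt = -grad f(x). Purely illustrative; the paper studies how such integrators should be modified for minimization, not this fixed-step loop:

```python
# RK4 integration of the gradient flow as a descent scheme; with Euler in
# place of RK4 this loop would be plain steepest descent.
import numpy as np

def rk4_descent(grad_f, x, h, steps):
    for _ in range(steps):
        k1 = -grad_f(x)
        k2 = -grad_f(x + 0.5 * h * k1)
        k3 = -grad_f(x + 0.5 * h * k2)
        k4 = -grad_f(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# ill-conditioned quadratic f(x) = 0.5*(x0^2 + 100*x1^2), minimum at the origin
grad = lambda x: np.array([x[0], 100.0 * x[1]])
print(rk4_descent(grad, np.array([5.0, 1.0]), h=0.02, steps=500))
```
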
  • C. Botsaris
    ABSTRACT: A class of recently developed differential descent methods for function minimization is presented and discussed, and a number of algorithms are derived which minimize a quadratic function in a finite number of steps and rapidly minimize general functions. The main characteristics of our algorithms are that a more general curvilinear search path is used instead of a ray and that the eigensystem of the Hessian matrix is associated with the function minimization problem. The curvilinear search paths are obtained by solving certain initial-value systems of differential equations, which also suggest the development of modifications of known numerical integration techniques for use in function minimization. Results obtained on testing the algorithms on a number of test functions are also given, and possible areas for future research are indicated.
    Journal of Mathematical Analysis and Applications 01/1978; 63(1):177-198. · 1.05 Impact Factor
  • C. Botsaris, D. Jacobson
    ABSTRACT: A new search method is presented for unconstrained optimization. The method requires the evaluation of first and second derivatives and defines a curve along which a unidimensional step takes place. For large step-size, the method performs as Newton's method, but it does not fail where the latter fails. For small step-size, the method behaves as the gradient method.
    Journal of Mathematical Analysis and Applications 01/1976; 54(1):217-229. · 1.05 Impact Factor
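
A sketch of the kind of curve described here, reconstructed from the stated limiting behavior rather than verified against the paper: with eigenpairs (lam_i, u_i) of the Hessian and gradient g, the path x(t) = x - sum_i ((1 - exp(-lam_i t))/lam_i)(u_i^T g) u_i reduces to the gradient step x - t*g as t -> 0 and, for a positive definite Hessian, to the Newton point x - H^{-1} g as t -> infinity:

```python
# Eigensystem-based curvilinear path with gradient and Newton limits; the
# lam -> 0 coefficient limit is t, handled explicitly below.
import numpy as np

def curvilinear_path(x, H, g, t):
    lam, U = np.linalg.eigh(H)
    coeff = np.where(np.abs(lam) > 1e-12,
                     (1.0 - np.exp(-lam * t)) / lam,  # generic eigenvalue
                     t)                               # limit for lam -> 0
    return x - U @ (coeff * (U.T @ g))

H = np.array([[3.0, 0.0], [0.0, 1.0]])
g = np.array([1.0, 2.0])
x = np.zeros(2)
print(curvilinear_path(x, H, g, 1e-4))   # ~ x - 1e-4*g   (gradient step)
print(curvilinear_path(x, H, g, 50.0))   # ~ x - inv(H) @ g  (Newton step)
```
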
  • ABSTRACT: In a recent article, we introduced a method based on a conic model for unconstrained optimization. The acceleration of the convergence of this method was obtained by choosing more appropriate points at which to apply the conic model. In particular, we applied to the gradient of the objective function a dimension-reducing method for the numerical solution of a system of algebraic equations. In this work, we incorporate into the previous method the non-monotone Armijo line search introduced by Grippo, Lampariello and Lucidi, combined with the Barzilai and Borwein steplength, in order to further accelerate the convergence. The new method does not guarantee descent in the objective function value at each iteration. Nevertheless, the use of this non-monotone line search allows the objective function to increase at some iterations without affecting the global convergence properties. The new method has been implemented and tested on well-known test functions. It converges in n+1 iterations on conic functions and, as numerical results indicate, rapidly minimizes general functions.

Publication Stats

111 Citations
9.30 Total Impact Points

Institutions

  • 2010–2013
    • University of Patras
      • Laboratory of Operations Research
      • Department of Mathematics
      Rhion, West Greece, Greece
  • 2010–2011
    • University of Central Greece
      Lamia, Central Greece, Greece
  • 1979
    • Aristotle University of Thessaloniki
      • Division of Mathematics (MATH)
      Saloníki, Central Macedonia, Greece
  • 1978
    • Council for Scientific and Industrial Research, South Africa
      Pretoria/Cape Town, Gauteng, South Africa
  • 1976
    • University of the Witwatersrand
      Johannesburg, Gauteng, South Africa
    • University of Johannesburg
      • Department of Applied Mathematics
      Johannesburg, Gauteng, South Africa