
Lennin Mallma-Ramirez, PhD
- PostDoc at Federal University of Rio de Janeiro
DHALA Algorithm.
About
39
Publications
3,097
Reads
85
Citations
Introduction
Linear and nonlinear programming, Hyperbolic augmented Lagrangian Algorithm (HALA).
Current institution
Additional affiliations
March 2018 - March 2022
Instituto Alberto Luiz Coimbra de Pós-Graduação e Pesquisa em Engenharia (COPPE)
Position
- PhD Student
Education
January 2024 - January 2024
March 2018 - March 2022
Publications
Publications (39)
In this note, we use the Dislocation Hyperbolic Augmented Lagrangian Algorithm (DHALA), also known as the Hyperbolic Multiplier Algorithm (HyMA), to computationally solve mathematical programs with complementarity (or equilibrium) constraints. Furthermore, the subproblem generated by our algorithm is solved with a second-order algorithm and...
This paper deals with the Constrained Riemannian Optimization (CRO) problem, which involves minimizing a function subject to equality and inequality constraints on Riemannian manifolds. The study aims to advance optimization theory in the Riemannian setting by presenting and analyzing a penalty-type method for solving CRO problems. The proposed app...
In this note, we propose a theoretical method called Coupled Augmented Lagrangian Methods (CALM), which combines a smoothed augmented Lagrangian algorithm with a non-smoothed one. The method begins with the smoothed algorithm and, once the smoothing parameter becomes sufficiently small, passes the obtained values into the non-smoothed augmented Lagrangian algor...
In this note, we are interested in solving a convex constrained optimization problem. To solve it, we propose an algorithm that belongs to the class of multiplier methods studied by Kort and Bertsekas, whose convergence was theoretically established by them in 1972. The main feature of our propose...
In this work, we present a nonquadratic augmented Lagrangian algorithm, called the dislocation hyperbolic augmented Lagrangian algorithm (DHALA). This algorithm solves nonconvex optimization problems with inequality constraints and box constraints; box constraints had not previously been handled by this algorithm. Finally, we com...
In this note, we ensure that the dislocation hyperbolic augmented Lagrangian algorithm converges to a global minimizer under nonconvexity assumptions. The subproblem generated by this algorithm is solved with the DIRECT algorithm. Finally, we present computational experiments to show the good performance of the proposed algorithm.
In this note, we comment on our work published in the RAIRO-Oper. Res.
In this note, we extend the hyperbolic augmented Lagrangian algorithm (HALA) to nonconvex programming problems; that is, we guarantee that the sequence generated by HALA converges, under mild assumptions, to a Karush-Kuhn-Tucker (KKT) point.
In this work, we present a new augmented Lagrangian-type algorithm based on the dislocation hyperbolic augmented Lagrangian function (DHALF), called the dislocation hyperbolic augmented Lagrangian algorithm (DHALA). We ensure that DHALA converges to a global solution of the inequality-constrained nonconvex optimization problem. We pr...
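The hyperbolic augmented Lagrangian idea can be sketched on a toy problem. The exact DHALF/DHALA formulas from these papers are not reproduced here; this sketch only uses a generic hyperbolic smoothing h_tau(y) = (y + sqrt(y^2 + tau^2))/2 of the plus function inside a penalty term, with an assumed penalty/smoothing schedule, on the illustrative problem min (x - 2)^2 subject to x - 1 <= 0 (solution x* = 1). All function names and parameter values are assumptions for illustration.

```python
import math

# Illustrative sketch only, NOT the papers' DHALF/DHALA: a hyperbolic
# smoothing of max(0, y) inside a penalty term, on the toy problem
#   minimize (x - 2)^2  subject to  g(x) = x - 1 <= 0   (solution x* = 1).

def h(y, tau):
    """Hyperbolic smoothing of the plus function max(0, y)."""
    return 0.5 * (y + math.sqrt(y * y + tau * tau))

def h_prime(y, tau):
    """Derivative of h with respect to y; takes values in (0, 1)."""
    return 0.5 * (1.0 + y / math.sqrt(y * y + tau * tau))

def solve_subproblem(lam, tau, lo=-10.0, hi=10.0, tol=1e-10):
    """Minimize (x-2)^2 + lam * h(x-1, tau) by bisection on its derivative
    (valid here because the smoothed objective is convex in x)."""
    def dpsi(x):
        return 2.0 * (x - 2.0) + lam * h_prime(x - 1.0, tau)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dpsi(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = 0.0
# Assumed schedule: grow the penalty weight, shrink the smoothing parameter.
for lam, tau in [(1.0, 1e-1), (10.0, 1e-2), (100.0, 1e-3)]:
    x = solve_subproblem(lam, tau)

print(round(x, 3))  # close to the constrained minimizer x* = 1
```

As the smoothing parameter tau shrinks, the smoothed penalty approaches the exact penalty max(0, g(x)), so the subproblem minimizers approach feasibility.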
In the work of Roman A. Polyak [3], the modified Chen-Harker-Kanzow-Smale (CHKS) function was studied to relate a multiplier method and an Interior Prox method with the second-order distance function. Independently, the dislocated hyperbolic penalty function (DHPF) was proposed by A.E. Xavier (1992). DHPF was rewritten and studied in [1] and [2]. Thus,...
In [E. G. Birgin, R. Castillo and J. M. Martínez, Computational Optimization and Applications 31, pp. 31-55, 2005], a general class of safeguarded augmented Lagrangian methods is introduced which includes a large number of different methods from the literature. Besides a numerical comparison including 65 different methods, primal-dual global conver...
In this paper, we study an augmented Lagrangian-type algorithm called the Dislocation Hyperbolic Augmented Lagrangian Algorithm (DHALA), which solves nonconvex optimization problems with inequality constraints. We show that the sequence generated by DHALA converges to a Karush-Kuhn-Tucker (KKT) point under the Mangasarian-Fromovitz constraint qualification. The c...
We guarantee strong duality and the existence of a saddle point of the hyperbolic augmented Lagrangian function (HALF) in convex optimization. To guarantee these results, we assume a set of convexity hypotheses and the Slater condition. Finally, we computationally illustrate the theoretical results obtained in this work.
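Strong duality under Slater's condition, as claimed above, can be checked numerically on a toy convex problem. This sketch uses the classical Lagrangian rather than the paper's HALF (an assumption for illustration): for min (x - 2)^2 subject to x - 1 <= 0, the primal and dual optimal values coincide at the saddle point (x*, lam*) = (1, 2).

```python
# Hedged illustration with the classical Lagrangian, NOT the papers' HALF:
# verify strong duality on  min (x-2)^2  s.t.  x - 1 <= 0  (Slater holds,
# e.g. x = 0 is strictly feasible).

def f(x): return (x - 2.0) ** 2
def g(x): return x - 1.0
def L(x, lam): return f(x) + lam * g(x)

def q(lam):
    """Dual function: exact inner minimization, since dL/dx = 0 at x = 2 - lam/2."""
    x = 2.0 - lam / 2.0
    return L(x, lam)

# Maximize q over lam >= 0 on a coarse grid.
lam_star = max((i * 0.01 for i in range(1001)), key=q)
x_star = 2.0 - lam_star / 2.0

print(lam_star, x_star)     # close to the saddle point (2, 1)
print(q(lam_star), f(1.0))  # dual value matches the primal value (both ~1)
```

The dual value q(lam*) equaling the primal value f(x*) is exactly the zero duality gap that strong duality asserts, and (x*, lam*) is a saddle point of L.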
The dislocation hyperbolic augmented Lagrangian algorithm (DHALA) is a new approach to the hyperbolic augmented Lagrangian algorithm (HALA), designed to solve convex nonlinear programming problems. We guarantee that the sequence generated by DHALA converges to a Karush-Kuhn-Tucker point. We will observe that DHALA has a slight...
A new kernel function, called the dislocation hyperbolic kernel function, is introduced in mathematical optimization. It is based on the dislocation hyperbolic function. Finally, we present some applications of this new function.
ABSTRACT The Hyperbolic Augmented Lagrangian Algorithm (HALA) is a new algorithm, recently proposed in the literature, for solving the constrained convex nonlinear programming problem. We present the convergence result and some properties of this algorithm. In this work, we mainly show several computational experiments using...
The dislocation hyperbolic augmented Lagrangian algorithm (DHALA) solves the nonconvex programming problem using an update rule for its penalty parameter and a condition that ensures the complementarity condition. In this work, we ensure that the sequence generated by DHALA converges to a Karush-Kuhn-Tucker (KKT) point, and we present...
https://impa.br/wp-content/uploads/2023/07/Lennin-Mallma-RAMIREZ.pdf
https://impa.br/wp-content/uploads/2023/07/Alexandre-Belfort.pdf
In this work, we study a new algorithm recently proposed in [7] for solving the convex optimization problem, called the Hyperbolic Augmented Lagrangian Algorithm (HALA). We present several computational experiments to assess the performance of this new algorithm.
In this work, we present an approach to guarantee that the hyperbolic augmented Lagrangian function (HALF) has local saddle points. This result is obtained under the second-order sufficient condition.
Abstract: We guarantee strong duality and the existence of a saddle point of the hyperbolic augmented Lagrangian function (HALF) in convex optimization. In order to guarantee these results, we assume a set of convexity hypotheses and the Slater condition. Finally, we computationally illustrate the theoretical results obtained in this work.
In this note, some results are introduced under the assumptions of quasiconvexity and nonmonotonicity; finally, an application and an idea for solving the quasiconvex equilibrium problem are presented based on these new results.
In this paper, we propose an inexact proximal point method to solve equilibrium problems using proximal distances and the diagonal subdifferential. Under some natural assumptions on the problem and the quasimonotonicity condition on the bifunction, we prove that the sequence generated by the method converges to a solution point of the problem.
In this paper, we propose an inexact proximal point method to solve equilibrium problems using proximal distances and the diagonal subdifferential. Under some natural assumptions on the problem and the quasimonotonicity condition on the bifunction, we prove that the sequence generated by the method converges to a solution point of the problem.
In this paper, we introduce an inexact proximal point algorithm using proximal distances for solving variational inequality problems when the mapping is pseudomonotone or quasimonotone. Under some natural assumptions, we prove that the sequence generated by the algorithm is convergent in the pseudomonotone case and weakly convergent in the quasimo...
In this paper, we propose an inexact proximal point method to solve constrained minimization problems with locally Lipschitz quasiconvex objective functions. Assuming that the function is also bounded from below and lower semicontinuous, and using proximal distances, we show that the sequence generated by the method converges to a stationary point of t...
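The proximal point iteration underlying these methods can be sketched in its simplest exact, Euclidean form; the papers' inexact steps and general proximal distances are not modeled here, and the helper names are illustrative.

```python
# Minimal exact proximal point sketch (Euclidean distance), in 1D:
#   x_{k+1} = argmin_z  f(z) + (1/(2*t)) * (z - x_k)^2

def prox_step(f_prime, x, t, lo=-100.0, hi=100.0, tol=1e-12):
    """One proximal step for a convex differentiable f: solve
    f'(z) + (z - x)/t = 0 by bisection (the map is increasing in z)."""
    def phi(z):
        return f_prime(z) + (z - x) / t
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

f_prime = lambda x: 2.0 * (x - 3.0)  # f(x) = (x - 3)^2, minimizer at x = 3
x = 0.0
for _ in range(50):
    x = prox_step(f_prime, x, t=0.5)

print(round(x, 6))  # approaches the minimizer x* = 3
```

Here each step contracts toward the minimizer (for this f and t = 0.5, x_{k+1} = (x_k + 3)/2), which is the basic convergence mechanism that the inexact, proximal-distance variants generalize.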