Radu Ioan Boţ

Verified
Radu verified their affiliation via an institutional email.
  • Univ.-Prof. Dr.
  • Professor (Full) at University of Vienna

About

282
Publications
39,446
Reads
6,403
Citations
Current institution
University of Vienna
Current position
  • Professor (Full)
Additional affiliations
July 2013 - September 2017
University of Vienna
Position
  • Professor (Associate)
April 2011 - June 2013
Chemnitz University of Technology
Position
  • PostDoc Position
October 2010 - March 2011
Heinrich Heine University Düsseldorf
Position
  • Professor (Full)

Publications

Publications (282)
Preprint
Full-text available
In a Hilbert space $H$, in order to develop fast optimization methods, we analyze the asymptotic behavior, as time $t$ tends to infinity, of inertial continuous dynamics where the damping acts as a closed-loop control. The function $f: H \to \mathbb{R}$ to be minimized (not necessarily convex) enters the dynamics via its gradient, which is assumed to be L...
Article
Full-text available
Recently, there has been a great interest in analysing dynamical flows, where the stationary limit is the minimiser of a convex energy. Particular flows of great interest have been continuous limits of Nesterov’s algorithm and the fast iterative shrinkage-thresholding algorithm, respectively. In this paper, we approach the solutions of linear ill-p...
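The abstract above is truncated; the flows it studies are continuous-time limits of Nesterov's method and of FISTA. As a rough, illustrative sketch only (not the paper's method), here is plain FISTA applied to a toy discretized linear inverse problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$; the matrix, data, and parameters below are invented for the example:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=500):
    # FISTA for min 0.5*||A x - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Toy forward operator and data (illustrative only).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
x = fista(A, b, lam=1e-3)
```

With the small regularization weight, the iterates approach the least-squares solution (1, 1) up to a slight shrinkage.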
Preprint
Full-text available
This work aims to minimize a continuously differentiable convex function with Lipschitz continuous gradient under linear equality constraints. The proposed inertial algorithm results from the discretization of the second-order primal-dual dynamical system with asymptotically vanishing damping term considered by Boţ and Nguyen in [Boţ, Nguyen, JD...
Article
In this paper, we consider a broad class of nonsmooth and nonconvex fractional programs, which encompass many important modern optimization problems arising from diverse areas such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing. We propose a proximal subgradient algorithm with extrapolations for s...
Article
Full-text available
In this work, we approach the minimization of a continuously differentiable convex function under linear equality constraints by a second-order dynamical system with asymptotically vanishing damping term. The system is formulated in terms of the augmented Lagrangian associated to the minimization problem. We show fast convergence of the primal-dual...
Preprint
Full-text available
We analyze fast diagonal methods for simple bilevel programs. Guided by the analysis of the corresponding continuous-time dynamics, we provide a unified convergence analysis under general geometric conditions, including Hölderian growth and the Attouch-Czarnecki condition. Our results yield explicit convergence rates and guarantee weak convergenc...
Preprint
Full-text available
In this paper, we study a class of nonconvex and nonsmooth structured difference-of-convex (DC) programs, which contain in the convex part the sum of a nonsmooth linearly composed convex function and a differentiable function, and in the concave part another nonsmooth linearly composed convex function. Among the various areas in which such problems...
Preprint
Full-text available
It is known that if a twice differentiable function has a Lipschitz continuous Hessian, then its gradients satisfy a Jensen-type inequality. In particular, this inequality is Hessian-free in the sense that the Hessian does not actually appear in the inequality. In this paper, we show that the converse holds in a generalized setting: if a continuos...
Preprint
Full-text available
In a real Hilbert space, we consider two classical problems: the global minimization of a smooth and convex function $f$ (i.e., a convex optimization problem) and finding the zeros of a monotone and continuous operator $V$ (i.e., a monotone equation). Attached to the optimization problem, first we study the asymptotic properties of the trajectories...
Preprint
Full-text available
In this paper, we derive a Fast Reflected Forward-Backward (Fast RFB) algorithm to solve the problem of finding a zero of the sum of a maximally monotone operator and a monotone and Lipschitz continuous operator in a real Hilbert space. Our approach extends the class of reflected forward-backward methods by introducing a Nesterov momentum term and...
Preprint
Full-text available
In this paper we introduce, in a Hilbert space setting, a second order dynamical system with asymptotically vanishing damping and vanishing Tikhonov regularization that approaches a multiobjective optimization problem with convex and differentiable components of the objective function. Trajectory solutions are shown to exist in finite dimensions. W...
Preprint
Full-text available
We study accelerated Krasnosel'kiĭ-Mann-type methods with preconditioners in both continuous and discrete time. From a continuous time model, we derive a generalized fast Krasnosel'kiĭ-Mann method, providing a new yet simple proof of convergence that allows for unprecedented flexibility in parameter tuning. Our analysis unifies inertial a...
Preprint
Full-text available
In this paper, we introduce a novel Extra-Gradient method with anchor term governed by general parameters. Our method is derived from an explicit discretization of a Tikhonov-regularized monotone flow in Hilbert space, which provides a theoretical foundation for analyzing its convergence properties. We establish strong convergence to specific point...
Article
In a Hilbert setting, we develop a gradient-based dynamic approach for fast solving convex optimization problems. By applying time scaling, averaging, and perturbation techniques to the continuous steepest descent (SD), we obtain high-resolution ordinary differential equations of the Nesterov and Ravine methods. These dynamics involve asymptoticall...
Preprint
Full-text available
In this work, we approach the problem of finding the zeros of a continuous and monotone operator through a second-order dynamical system with a damping term of the form $1/t^{r}$, where $r\in [0, 1]$. The system features the time derivative of the operator evaluated along the trajectory, which is a Hessian-driven type damping term when the governin...
Preprint
Full-text available
In the framework of real Hilbert spaces, we investigate first-order dynamical systems governed by monotone and continuous operators. It has been established that for these systems, only the ergodic trajectory converges to a zero of the operator. A notable example is the counterclockwise $\pi/2$-rotation operator on $\mathbb{R}^2$, which illustrates that general...
Preprint
Full-text available
In our pursuit of finding a zero for a monotone and Lipschitz continuous operator M: ℝⁿ → ℝⁿ amidst noisy evaluations, we explore an associated differential equation within a stochastic framework, incorporating a correction term. We present a result establishing the existence and uniqueness of solutions for the stochastic differential equations...
Preprint
Full-text available
In this paper, we consider a class of nonconvex and nonsmooth fractional programming problems, which involve the sum of a convex, possibly nonsmooth function composed with a linear operator and a differentiable, possibly nonconvex function in the numerator and a convex, possibly nonsmooth function composed with a linear operator in the denominator....
Preprint
Full-text available
We address the problem of finding the zeros of the sum of a maximally monotone operator and a cocoercive operator. Our approach introduces a modification to the forward-backward method by integrating an inertial/momentum term alongside a correction term. We demonstrate that the sequence of iterations thus generated converges weakly towards a soluti...
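The abstract is truncated and the paper's correction term is not reproduced here; purely for orientation, below is a minimal sketch of the underlying inertial forward-backward template for $0 \in A(x) + B(x)$ with $B$ cocoercive. The operators and parameters are toy choices, not taken from the paper:

```python
import numpy as np

def inertial_forward_backward(prox_A, B, x0, gamma=0.5, alpha=0.3, n_iter=200):
    # Inertial forward-backward for 0 in A(x) + B(x):
    #   z_k     = x_k + alpha*(x_k - x_{k-1})        (momentum)
    #   x_{k+1} = J_{gamma A}(z_k - gamma*B(z_k))    (backward/resolvent step)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        z = x + alpha * (x - x_prev)
        x_prev, x = x, prox_A(z - gamma * B(z))
    return x

# Example: A = normal cone of the box [0,1]^2 (resolvent = projection),
# B(x) = x - c (cocoercive); the zeros of A + B give the projection of c.
c = np.array([1.5, -0.25])
proj = lambda v: np.clip(v, 0.0, 1.0)
x = inertial_forward_backward(proj, lambda v: v - c, np.zeros(2))
```

Here the iterates converge to the projection of c onto the box, i.e. (1, 0).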
Article
Full-text available
In the framework of real Hilbert spaces, we study continuous in time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator V. The starting point of our investigations is a second-order dynamical system that combines a vanishing damping term with the time derivat...
Article
Minimax problems of the form $\min_x \max_y \Psi(x, y)$ have attracted increased interest largely due to advances in machine learning, in particular generative adversarial networks and adversarial learning. These are typically trained using variants of stochastic gradient descent for the two players. Although convex-concave problems are well understood...
Preprint
We study monotone variational inequalities that can arise as optimality conditions for constrained convex optimisation or convex-concave minimax problems and propose a novel algorithm that uses only one gradient/operator evaluation and one projection onto the constraint set per iteration. The algorithm, which we call fOGDA-VI, achieves an $o \left(...
Preprint
Full-text available
The recovery of a signal from the magnitudes of its transformation, like the Fourier transform, is known as the phase retrieval problem and is of great relevance in various fields of engineering and applied physics. In this paper, we present a fast inertial/momentum based algorithm for the phase retrieval problem and we prove a convergence guarantee...
Conference Paper
Full-text available
We study monotone variational inequalities that can arise as optimality conditions for constrained convex optimisation or convex-concave minimax problems and propose a novel algorithm that uses only one gradient/operator evaluation and one projection onto the constraint set per iteration. The algorithm, which we call fOGDA-VI, achieves an o(1/k) ra...
Preprint
Full-text available
In a Hilbert space H, we study the convergence properties of the trajectories of a Newton-like inertial dynamical system with a Tikhonov regularization term governed by a general maximally monotone operator A: H → 2^H. The maximally monotone operator enters the dynamics via its Yosida approximation with an appropriate adjustment of the Yosida regu...
Preprint
Full-text available
In a Hilbert setting, for convex differentiable optimization, we develop a general framework for adaptive accelerated gradient methods. They are based on damped inertial dynamics where the coefficients are designed in a closed-loop way. Specifically, the damping is a feedback control of the velocity, or of the gradient of the objective function. Fo...
Article
Full-text available
We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator. This work aims to extend Tseng's forward-backward-forward method by both using inertial effects as well as relaxation parame...
Article
The recovery of a signal from the magnitudes of its transformation, like the Fourier transform, is known as the phase retrieval problem and is of great relevance in various fields of engineering and applied physics. In this paper, we present a fast inertial/momentum based algorithm for the phase retrieval problem. Our method can be seen as an extende...
Article
Full-text available
In a Hilbert setting, we study the convergence properties of the second order in time dynamical system combining viscous and Hessian-driven damping with time scaling in relation to the minimization of a nonsmooth and convex function. The system is formulated in terms of the gradient of the Moreau envelope of the objective function with a time-depen...
Article
Full-text available
In this work, we study resolvent splitting algorithms for solving composite monotone inclusion problems. The objective of these general problems is finding a zero in the sum of maximally monotone operators composed with linear operators. Our main contribution is establishing the first primal-dual splitting algorithm for composite monotone inclusion...
Preprint
Full-text available
In a Hilbert setting, we develop a gradient-based dynamic approach for fast solving convex optimization problems. By applying time scaling, averaging, and perturbation techniques to the continuous steepest descent (SD), we obtain high-resolution ODEs of the Nesterov and Ravine methods. These dynamics involve asymptotically vanishing viscous damping...
Article
Full-text available
This work aims to minimize a continuously differentiable convex function with Lipschitz continuous gradient under linear equality constraints. The proposed inertial algorithm results from the discretization of the second-order primal-dual dynamical system with asymptotically vanishing damping term addressed by Boţ and Nguyen (J. Differential Equati...
Preprint
Full-text available
The Krasnosel'skii-Mann (KM) algorithm is the most fundamental iterative scheme designed to find a fixed point of an averaged operator in the framework of a real Hilbert space, since it lies at the heart of various numerical algorithms for solving monotone inclusions and convex optimization problems. We enhance the Krasnosel'skii-Mann algorithm wit...
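The enhancement the entry describes is truncated; as a baseline reference only, here is a minimal sketch of the classical Krasnosel'skiĭ-Mann iteration the paper starts from. The rotation example is illustrative and not from the paper:

```python
import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, n_iter=300):
    # KM iteration x_{k+1} = (1-lam)*x_k + lam*T(x_k), lam in (0,1),
    # for approximating a fixed point of a nonexpansive map T.
    x = x0.copy()
    for _ in range(n_iter):
        x = (1.0 - lam) * x + lam * T(x)
    return x

# Example: a 90-degree rotation is nonexpansive with unique fixed point 0;
# plain Picard iteration cycles forever, but the KM average converges.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x = krasnoselskii_mann(lambda v: R @ v, np.array([1.0, 1.0]))
```

The averaged map (1-λ)I + λR has spectral radius below 1 for any rotation angle in (0, 2π), which is why the KM average succeeds where Picard iteration cycles.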
Article
Full-text available
In this work we aim to solve a convex-concave saddle point problem, where the convex-concave coupling function is smooth in one variable and nonsmooth in the other and not assumed to be linear in either. The problem is augmented by a nonsmooth regulariser in the smooth component. We propose and investigate a novel algorithm under the name of OGAPro...
Article
Full-text available
In the framework of a real Hilbert space, we address the problem of finding the zeros of the sum of a maximally monotone operator A and a cocoercive operator B. We study the asymptotic behaviour of the trajectories generated by a second order equation with vanishing damping, attached to this problem, and governed by a time-dependent forward–backwa...
Preprint
Full-text available
In the framework of real Hilbert spaces we study continuous in time dynamics as well as numerical algorithms for the problem of approaching the set of zeros of a single-valued monotone and continuous operator $V$. The starting point of our investigations is a second order dynamical system that combines a vanishing damping term with the time derivat...
Article
In a real Hilbert space $\mathcal{H}$, in order to develop fast optimization methods, we analyze the asymptotic behavior, as time $t$ tends to infinity, of a large class of autonomous dissipative inertial continuous dynamics. The function $f: \mathcal{H} \to \mathbb{R}$ to be minimized (not necessarily convex) enters the dynamic via its gradient, which...
Preprint
Full-text available
In a Hilbert setting we study the convergence properties of a second order in time dynamical system combining viscous and Hessian-driven damping with time scaling in relation with the minimization of a nonsmooth and convex function. The system is formulated in terms of the gradient of the Moreau envelope of the objective function with time-dependen...
Preprint
Full-text available
In this work, we study resolvent splitting algorithms for solving composite monotone inclusion problems. The objective of these general problems is finding a zero in the sum of maximally monotone operators composed with linear operators. Our main contribution is establishing the first primal-dual splitting algorithm for composite monotone inclusion...
Preprint
Full-text available
In the framework of a real Hilbert space, we address the problem of finding the zeros of the sum of a maximally monotone operator $A$ and a cocoercive operator $B$. We study the asymptotic behaviour of the trajectories generated by a second order equation with vanishing damping, attached to this problem, and governed by a time-dependent forward-bac...
Preprint
Full-text available
In this work, we approach the minimization of a continuously differentiable convex function under linear equality constraints by a second-order dynamical system with asymptotically vanishing damping term. The system is formulated in terms of the augmented Lagrangian associated to the minimization problem. We show fast convergence of the primal-dual...
Article
Full-text available
We aim to factorize a completely positive matrix by using an optimization approach which consists in the minimization of a nonconvex smooth function over a convex and compact set. To solve this problem we propose a projected gradient algorithm with parameters that take into account the effects of relaxation and inertia. Both projection and gradient...
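As a hedged illustration of the optimization model in this entry (the paper's relaxation parameters are omitted, and the matrix below is a toy example), here is a projected gradient sketch with an inertial term for $\min_{X \geq 0} \tfrac12\|XX^T - A\|_F^2$:

```python
import numpy as np

def cp_factorize(A, r, step=0.02, alpha=0.2, n_iter=5000, seed=0):
    # Projected gradient with an inertial (momentum) term for
    # min_{X >= 0} 0.5*||X X^T - A||_F^2, with A symmetric.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.1, 1.0, (A.shape[0], r))
    X_prev = X.copy()
    for _ in range(n_iter):
        Y = X + alpha * (X - X_prev)                     # inertial extrapolation
        grad = 2.0 * (Y @ Y.T - A) @ Y                   # gradient of the objective
        X_prev, X = X, np.maximum(Y - step * grad, 0.0)  # project onto X >= 0
    return X

# A is completely positive: A = B B^T for an entrywise nonnegative B.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
X = cp_factorize(A, r=3)
err = float(np.linalg.norm(X @ X.T - A))
```

Since the problem is nonconvex, only convergence to a stationary point can be expected in general; on this small instance the residual shrinks close to zero.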
Preprint
Full-text available
In this work we aim to solve a convex-concave saddle point problem, where the convex-concave coupling function is smooth in one variable and nonsmooth in the other and not assumed to be linear in either. The problem is augmented by a nonsmooth regulariser in the smooth component. We propose and investigate a novel algorithm under the name of OGAPro...
Article
In this paper we aim to minimize the sum of two nonsmooth (possibly also nonconvex) functions in separate variables connected by a smooth coupling function. To tackle this problem we choose a continuous forward-backward approach and introduce a dynamical system which is formulated by means of the partial gradients of the smooth coupling function an...
Article
In this article, we propose a Krasnosel’skiǐ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators (T_n)_{n≥0} in Hilbert spaces. We formulate an asymptotic property which the family (T_n)_{n≥0} has to fulfill such that the sequence generated by the algorithm converges strongly to the element in ⋂_{n≥0} F...
Article
Full-text available
We first point out several flaws in the recent paper [R. Shefi, M. Teboulle: Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization, SIAM J. Optim. 24, 269--297, 2014] that proposes two ADMM-type algorithms for solving convex optimization problems involving compositions with linear...
Article
Full-text available
In this paper we propose a primal-dual dynamical approach to the minimization of a structured convex function consisting of a smooth term, a nonsmooth term, and the composition of another nonsmooth term with a linear continuous operator. In this scope we introduce a dynamical system for which we prove that its trajectories asymptotically converge t...
Preprint
Full-text available
In this paper, we consider a class of nonsmooth and nonconvex sum-of-ratios fractional optimization problems with block structure. This model class encompasses a broad spectrum of nonsmooth optimization problems such as the energy efficiency maximization problem and the sparse generalized eigenvalue problem. We first show that these problems can be...
Article
Full-text available
We aim to solve a structured convex optimization problem, where a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, usually, primal–dual type methods are employed as they are effective and also well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which...
Article
Full-text available
In this work we investigate dynamical systems designed to approach the solution sets of inclusion problems involving the sum of two maximally monotone operators. Our aim is to design methods which guarantee strong convergence of trajectories towards the minimum norm solution of the underlying monotone inclusion problem. To that end, we investigate...
Preprint
Full-text available
Minimax problems have attracted increased interest largely due to advances in machine learning, in particular generative adversarial networks. These are trained using variants of stochastic gradient descent for the two players. Although convex-concave problems are well understood with many efficient solution methods to choose from, theoretical guar...
Preprint
Full-text available
Motivated by the training of Generative Adversarial Networks (GANs), we study methods for solving minimax problems with additional nonsmooth regularizers. We do so by employing \emph{monotone operator} theory, in particular the \emph{Forward-Backward-Forward (FBF)} method, which avoids the known issue of limit cycling by correcting each update by a...
Preprint
Full-text available
We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system with Hessian driven damping and a Tikhonov regularization term in connection with the minimization of a smooth convex function in Hilbert spaces. We obtain fast convergence results for the function values along the trajectories. The Tikhonov re...
Article
Full-text available
We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system with Hessian driven damping and a Tikhonov regularization term in connection with the minimization of a smooth convex function in Hilbert spaces. We obtain fast convergence results for the function values along the trajectories. The Tikhonov re...
Article
Tseng’s forward-backward-forward algorithm is a valuable alternative for Korpelevich’s extragradient method when solving variational inequalities over a convex and closed set governed by monotone and Lipschitz continuous operators, as it requires in every step only one projection operation. However, it is well-known that Korpelevich’s method conver...
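The entry compares Tseng's forward-backward-forward method with Korpelevich's extragradient method. A minimal sketch of the classical FBF template it refers to, on a toy skew (rotation) operator over a box, all choices illustrative:

```python
import numpy as np

def tseng_fbf(F, proj, x0, gamma=0.5, n_iter=300):
    # Tseng's forward-backward-forward method for a monotone, Lipschitz F:
    # one projection and two operator evaluations per iteration.
    x = x0.copy()
    for _ in range(n_iter):
        Fx = F(x)
        y = proj(x - gamma * Fx)           # forward-backward step (the projection)
        x = y - gamma * (F(y) - Fx)        # forward correction, no second projection
    return x

# Example: skew rotation operator (monotone, 1-Lipschitz), VI over [-1,1]^2;
# the unique solution is the origin. Requires gamma < 1/L.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
x = tseng_fbf(lambda v: J @ v, lambda v: np.clip(v, -1.0, 1.0),
              np.array([0.5, 0.5]))
```

The skew operator is exactly the kind of merely monotone (non-cocoercive) map on which plain forward-backward fails but FBF converges.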
Preprint
Full-text available
We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator. This work aims to extend Tseng's forward-backward-forward method by both using inertial effects as well as relaxation parame...
Preprint
Full-text available
In this paper, we consider a broad class of nonsmooth and nonconvex fractional program where the numerator can be written as the sum of a continuously differentiable convex function whose gradient is Lipschitz continuous and a proper lower semicontinuous (possibly) nonconvex function, and the denominator is weakly convex over the constraint set. Th...
Preprint
Full-text available
In this paper we aim to minimize the sum of two nonsmooth (possibly also nonconvex) functions in separate variables connected by a smooth coupling function. To tackle this problem we chose a continuous forward-backward approach and introduce a dynamical system which is formulated by means of the partial gradients of the smooth coupling function and...
Article
Full-text available
We develop a new stochastic algorithm for solving pseudomonotone stochastic variational inequalities. Our method builds on Tseng’s forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich’s extragradient method when solving variational inequalities over a convex and closed set go...
Preprint
Full-text available
In this article, we propose a Krasnosel'skiĭ-Mann-type algorithm for finding a common fixed point of a countably infinite family of nonexpansive operators $(T_n)_{n \geq 0}$ in Hilbert spaces. We formulate an asymptotic property which the family $(T_n)_{n \geq 0}$ has to fulfill such that the sequence generated by the algorithm converges stron...
Preprint
Full-text available
In this work we investigate dynamical systems designed to approach the solution sets of inclusion problems involving the sum of two maximally monotone operators. Our aim is to design methods which guarantee strong convergence of trajectories towards the minimum norm solution of the underlying monotone inclusion problem. To that end, we investigate...
Chapter
Full-text available
We propose an iterative scheme for solving variational inequalities with monotone operators over affine sets in an infinite dimensional Hilbert space setting. We show that several primal-dual algorithms in the literature as well as the classical ADMM algorithm for convex optimization problems, together with some of its variants, are encompassed by...
Article
Full-text available
The Alternating Minimization Algorithm has been proposed by Paul Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of this method...
Preprint
Full-text available
In this paper we propose a primal-dual dynamical approach to the minimization of a structured convex function consisting of a smooth term, a nonsmooth term, and the composition of another nonsmooth term with a linear continuous operator. In this scope we introduce a dynamical system for which we prove that its trajectories asymptotically converge t...
Preprint
Full-text available
We aim to solve a structured convex optimization problem, where a nonsmooth function is composed with a linear operator. When opting for full splitting schemes, usually, primal-dual type methods are employed as they are effective and also well studied. However, under the additional assumption of Lipschitz continuity of the nonsmooth function which...
Preprint
Full-text available
We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward (FBF) algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities...
Conference Paper
Full-text available
We develop a new stochastic algorithm with variance reduction for solving pseudo-monotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a...
Article
Full-text available
We propose in this paper a unifying scheme for several algorithms from the literature dedicated to the solving of monotone inclusion problems involving compositions with linear continuous operators in infinite dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions and also the classical ADMM numerical sc...
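Among the methods encompassed by such unifying schemes is the classical ADMM. As a reminder of that baseline only (the data and parameters below are toy choices, not from the paper), here is ADMM for $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|z\|_1$ subject to $x = z$:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=300):
    # Classical ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z.
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cached x-update system
    Atb = A.T @ b
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))                                    # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                                                    # dual update
    return z

A = np.eye(2)
b = np.array([1.0, 0.05])
z = admm_lasso(A, b, lam=0.1)
```

With A the identity, the limit is simply the soft-thresholding of b at level λ, i.e. (0.9, 0).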
Chapter
Full-text available
We consider the dynamical system (formula presented), where ϕ: ℝⁿ → ℝ ∪ {+∞} is a proper, convex, and lower semicontinuous function, ψ: ℝⁿ → ℝ is a (possibly nonconvex) smooth function, and λ > 0 is a parameter which controls the velocity. We show that the set of limit points of the trajectory x is contained in the set of critical points of the...
Article
Full-text available
We propose a proximal algorithm for minimizing objective functions consisting of three summands: the composition of a nonsmooth function with a linear operator, another nonsmooth function (with each of the nonsmooth summands depending on an independent block variable), and a smooth function which couples the two block variables. The algorithm is a...
Preprint
Full-text available
Recently, there has been a great interest in analysing dynamical flows, where the stationary limit is the minimiser of a convex energy. Particular flows of great interest have been continuous limits of Nesterov's algorithm and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), respectively. In this paper we approach the solutions of linea...
Article
Full-text available
We investigate a forward–backward splitting algorithm of penalty type with inertial effects for finding the zeros of the sum of a maximally monotone operator and a cocoercive one and the convex normal cone to the set of zeros of another cocoercive operator. Weak ergodic convergence is obtained for the iterates, provided that a condition express...
Article
Full-text available
We investigate the asymptotic properties of the trajectories generated by a second-order dynamical system of proximal-gradient type stated in connection with the minimization of the sum of a nonsmooth convex and a (possibly nonconvex) smooth function. The convergence of the generated trajectory to a critical point of the objective is ensured provid...
Preprint
Full-text available
Tseng's forward-backward-forward algorithm is a valuable alternative for Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by monotone and Lipschitz continuous operators, as it requires in every step only one projection operation. However, it is well-known that Korpelevich's method is pro...
Article
Full-text available
We investigate the convergence properties of incremental mirror descent type subgradient algorithms for minimizing the sum of convex functions. In each step, we only evaluate the subgradient of a single component function and mirror it back to the feasible domain, which makes iterations very cheap to compute. The analysis is made for a randomized s...
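To make the "mirror a single component's subgradient back to the feasible domain" idea concrete, here is a minimal incremental entropic mirror descent sketch on the simplex; the objective split and all parameters are toy choices for illustration, not the paper's setting:

```python
import numpy as np

def incremental_mirror_descent(subgrads, x0, eta=0.1, n_epochs=200, seed=0):
    # Incremental (randomized-order) entropic mirror descent on the simplex:
    # each step uses the subgradient of a single component function and applies
    # the multiplicative-weights (KL-prox) update, which is cheap per iteration.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_epochs):
        for i in rng.permutation(len(subgrads)):
            x = x * np.exp(-eta * subgrads[i](x))   # mirror step (entropy geometry)
            x = x / x.sum()                         # normalize back to the simplex
    return x

# Example: minimize <c, x> over the simplex, split as f_i(x) = c_i * x_i;
# the minimum puts (almost) all mass on the smallest entry of c.
c = np.array([0.3, 0.1, 0.5])
subgrads = [lambda x, i=i: c[i] * np.eye(3)[i] for i in range(3)]
x = incremental_mirror_descent(subgrads, np.full(3, 1.0 / 3.0))
```

With a fixed step size the iterates concentrate on coordinate 1 (the smallest cost) without ever reaching the vertex exactly, as expected for mirror descent with constant η.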
Article
Full-text available
We investigate a second order dynamical system with variable damping in connection with the minimization of a nonconvex differentiable function. The dynamical system is formulated in the spirit of the differential equation which models Nesterov’s accelerated convex gradient method. We show that the generated trajectory converges to a critical point...
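The differential equation mentioned here, $\ddot{x}(t) + (\alpha/t)\dot{x}(t) + \nabla f(x(t)) = 0$, is the one whose discretization yields Nesterov's accelerated gradient method. As a convex-case illustration only (the paper treats the nonconvex setting; the quadratic below is a toy example):

```python
import numpy as np

def nesterov_from_dynamics(grad_f, x0, step, alpha=3.0, n_iter=500):
    # Discretization of x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0:
    # Nesterov's accelerated gradient with vanishing-damping momentum.
    x_prev = x = x0.copy()
    for k in range(1, n_iter + 1):
        y = x + (k - 1) / (k + alpha - 1) * (x - x_prev)   # momentum -> 1 as k grows
        x_prev, x = x, y - step * grad_f(y)                # gradient step, step <= 1/L
    return x

# Example: f(x) = 0.5 * x^T Q x - b^T x, minimizer Q^{-1} b = (1, 1).
Q = np.diag([1.0, 10.0])
b = np.array([1.0, 10.0])
x = nesterov_from_dynamics(lambda v: Q @ v - b, np.zeros(2), step=0.09)
```

The damping coefficient α ≥ 3 gives the well-known O(1/k²) decay of f(x_k) − min f in the convex case.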
Article
Full-text available
In this paper we carry out an asymptotic analysis of the proximal-gradient dynamical system $\dot{x}(t) + x(t) = \mathrm{prox}_{\gamma f}\big(x(t) - \gamma \nabla \Phi(x(t)) - a x(t) - b y(t)\big)$, $\dot{y}(t) + a x(t) + b y(t) = 0$...
Preprint
Full-text available
The Alternating Minimization Algorithm (AMA) has been proposed by Tseng to solve convex programming problems with two-block separable linear constraints and objectives, whereby (at least) one of the components of the latter is assumed to be strongly convex. The fact that one of the subproblems to be solved within the iteration process of AMA does n...
Preprint
Full-text available
We propose a proximal algorithm for minimizing objective functions consisting of three summands: the composition of a nonsmooth function with a linear operator, another nonsmooth function, each of the nonsmooth summands depending on an independent block variable, and a smooth function which couples the two block variables. The algorithm is a full s...
Article
Full-text available
We propose two forward–backward proximal point type algorithms with inertial/memory effects for determining weakly efficient solutions to a vector optimization problem, which consists in vector-minimizing, with respect to a given closed convex pointed cone, the sum of a proper cone-convex vector function and a cone-convex differentiable one, both mapping...
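In the scalar special case (cone = ℝ₊), an inertial forward–backward scheme of this kind reduces to a FISTA-type iteration. A minimal sketch for min μ‖x‖₁ + 0.5‖x − b‖², where the step size, momentum sequence, and data are illustrative assumptions rather than the paper's vector-valued method:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_forward_backward(grad, prox, x0, step, iters=1000):
    """Forward-backward splitting with an inertial/memory term:
    extrapolate using the previous iterate, then take one
    proximal-gradient step from the extrapolated point."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, iters + 1):
        y = x + (k - 1) / (k + 2) * (x - x_prev)   # inertial extrapolation
        x_prev, x = x, prox(y - step * grad(y), step)
    return x

b, mu = np.array([3.0, -0.5, 1.0]), 1.0
grad = lambda x: x - b                             # gradient of 0.5 ||x - b||^2
prox = lambda v, s: soft_threshold(v, s * mu)
x = inertial_forward_backward(grad, prox, np.zeros(3), step=0.5)
# The minimizer is soft_threshold(b, mu) = [2, 0, 0].
```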
Article
Full-text available
We propose two numerical algorithms for minimizing the sum of a smooth function and the composition of a nonsmooth function with a linear operator in the fully nonconvex setting. The iterative schemes are formulated in the spirit of the proximal and, respectively, proximal linearized alternating direction method of multipliers. The proximal terms a...
Preprint
We propose two numerical algorithms in the fully nonconvex setting for the minimization of the sum of a smooth function and the composition of a nonsmooth function with a linear operator. The iterative schemes are formulated in the spirit of the proximal alternating direction method of multipliers and its linearized variant, respectively. The proxi...
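For orientation, the plain convex ADMM iteration that such proximal and linearized variants build on can be sketched on a toy splitting min 0.5‖x − b‖² + μ‖z‖₁ s.t. x = z. This is a simplified convex stand-in, not the paper's nonconvex proximal/linearized schemes; ρ, the data, and the iteration count are illustrative.

```python
import numpy as np

def soft(v, t):
    """Proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm(b, mu, rho=1.0, iters=500):
    """Scaled-dual ADMM for min 0.5||x - b||^2 + mu ||z||_1  s.t.  x = z."""
    z = np.zeros_like(b)
    u = np.zeros_like(b)                        # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # x-update: quadratic subproblem
        z = soft(x + u, mu / rho)               # z-update: prox of the l1 term
        u = u + x - z                           # dual (multiplier) update
    return x, z

b = np.array([3.0, -0.5, 1.0])
x, z = admm(b, mu=1.0)
# The consensus solution is the soft-thresholding of b: [2, 0, 0].
```

The proximal variants in the paper add extra quadratic terms to the two subproblems; the update pattern (two block minimizations plus a multiplier step) is the same.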
Article
Full-text available
We consider the problem of minimizing a smooth convex objective function subject to the set of minima of another differentiable convex function. In order to solve this problem, we propose an algorithm which combines the gradient method with a penalization technique. Moreover, we insert in our algorithm an inertial term, which is able to take advant...
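The combination of a gradient step, a penalization term, and inertia can be sketched on a toy bilevel problem: minimize Φ(x) = 0.5‖x − p‖² over argmin Ψ with Ψ(x) = 0.5(x₁ − x₂)². For simplicity the sketch keeps the penalty parameter β fixed (the paper's scheme lets it vary), so it converges to the minimizer of Φ + βΨ, which for large β approximates the bilevel solution (1, 1); all parameters are illustrative assumptions.

```python
import numpy as np

def inertial_gradient_penalty(grad_phi, grad_psi, x0, step, beta, alpha=0.5, iters=5000):
    """Heavy-ball (inertial) gradient iteration on the penalized
    objective Phi + beta * Psi; with beta fixed this approximates the
    bilevel solution."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)            # inertial term from past iterates
        x_prev, x = x, y - step * (grad_phi(x) + beta * grad_psi(x))
    return x

p = np.array([2.0, 0.0])
grad_phi = lambda x: x - p                                   # Phi(x) = 0.5 ||x - p||^2
grad_psi = lambda x: (x[0] - x[1]) * np.array([1.0, -1.0])   # Psi(x) = 0.5 (x1 - x2)^2
beta = 100.0
x = inertial_gradient_penalty(grad_phi, grad_psi, np.zeros(2), step=1.0 / 201.0, beta=beta)
```

The penalized minimizer here is (1 + 1/201, 1 − 1/201), within about 0.007 of the bilevel solution (1, 1); shrinking the penalty error is exactly what the paper's growing penalty parameter achieves.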
Article
Full-text available
In connection with the optimization problem inf_{x ∈ argmin Ψ} {Φ(x) + Θ(x)}, ...
Article
Full-text available
We propose a proximal-gradient algorithm with penalization terms and inertial and memory effects for minimizing the sum of a proper, convex, and lower semicontinuous function and a convex differentiable function subject to the set of minimizers of another convex differentiable function. We show that, under suitable choices for the step sizes and the penaliz...
Preprint
We propose in this paper a unifying scheme for several algorithms from the literature dedicated to solving monotone inclusion problems involving compositions with linear continuous operators in infinite dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions and also the classical ADMM numerical sc...
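One well-known primal-dual method for problems with a linear composition is the algorithm of Chambolle and Pock; whether it is among the schemes treated in the paper is not stated in this excerpt, so the sketch below is purely for orientation. It solves min_x μ‖Kx‖₁ + 0.5‖x − b‖² on a toy 1D total-variation example; the step sizes and data are illustrative assumptions.

```python
import numpy as np

def chambolle_pock(K, b, mu, tau, sigma, iters=20000):
    """Primal-dual (Chambolle-Pock) iteration for
    min_x mu * ||K x||_1 + 0.5 * ||x - b||^2."""
    x = np.zeros(K.shape[1])
    x_bar = x.copy()
    y = np.zeros(K.shape[0])
    for _ in range(iters):
        # dual step: prox of the conjugate of mu*||.||_1 is a clip to [-mu, mu]
        y = np.clip(y + sigma * (K @ x_bar), -mu, mu)
        # primal step: prox of tau * 0.5 * ||. - b||^2
        x_new = (x - tau * (K.T @ y) + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x                 # extrapolation of the primal variable
        x = x_new
    return x, y

# Toy 1D total-variation denoising: K is the forward-difference operator.
K = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, -1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0, 1.0]])
b = np.array([0.0, 0.0, 3.0, 3.0])
x, y = chambolle_pock(K, b, mu=1.0, tau=0.4, sigma=0.4)   # tau * sigma * ||K||^2 < 1
```

For this data the solution can be computed by hand: the jump in b is shrunk by the TV term, giving x = (0.5, 0.5, 2.5, 2.5).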
Article
Full-text available
We consider the dynamical system v(t) ∈ ∂φ(x(t)), λẋ(t) + v̇(t) + v(t) + ∇ψ(x(t)) = 0, where φ: ℝⁿ → ℝ ∪ {+∞} is a proper, convex and lower semicontinuous function, ψ: ℝⁿ → ℝ is a (possibly nonconvex) smooth function a...
Article
Full-text available
In this paper we investigate in a Hilbert space setting a second order dynamical system of the form ẍ(t) + γ(t)ẋ(t) + x(t) − J_{λ(t)A}(x(t) − λ(t)D(x(t)) − λ(t)β(t)B(x(t))) = 0, where A: ℋ ⇉ ℋ is a maximal monotone operator, J_{λ(t)A}: ℋ → ℋ is the resolven...
