The perturbed proximal point algorithm and some of its applications

Applied Mathematics and Optimization (Impact Factor: 0.59). 02/1994; 29(2):125-159. DOI: 10.1007/BF01204180


Following the works of R. T. Rockafellar, to search for a zero of a maximal monotone operator, and of B. Lemaire, to solve convex optimization problems, we present a perturbed version of the proximal point algorithm. We apply this new algorithm to convex optimization and to variational inclusions or, more particularly, to variational inequalities.
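The perturbed iteration described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the monotone operator T, its closed-form resolvent, the step size c, and the geometric perturbation schedule are all assumptions chosen so the resolvent is explicit.

```python
def resolvent(y, c, a=3.0):
    # Resolvent (I + c*T)^(-1) of the maximal monotone operator
    # T(x) = x - a on the real line, whose unique zero is x* = a.
    # Closed form: (y + c*a) / (1 + c).
    return (y + c * a) / (1.0 + c)

def perturbed_ppa(x0, c=1.0, iters=50):
    # Perturbed proximal point iteration: each step evaluates the
    # resolvent and then adds a perturbation e_k. Convergence results
    # of this type require the errors to be summable; here e_k = 0.5^k.
    x = x0
    for k in range(iters):
        e_k = 0.5 ** (k + 1)  # summable perturbation (illustrative)
        x = resolvent(x, c) + e_k
    return x

print(perturbed_ppa(10.0))  # approaches the zero x* = 3
```

Because the perturbations are summable, the iterates still converge to the zero of T despite the inexact resolvent evaluations.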

Citing articles:
    • "The notion of relatively maximal relaxed monotonicity is based on the notion of A-maximal relaxed monotonicity [1] and its variants, introduced and studied in [6]–[13]. It is more general than the usual maximal monotonicity, which cannot be recovered in that context, and it appears to be application-oriented. More details on relaxed and hybrid proximal point algorithms can be found in [1]–[5], [8]–[14], [16]–[18], [21], [22]. We present a generalization of a well-cited result of Eckstein and Bertsekas [14, Theorem 3] to the case of relatively maximal (m)-relaxed monotone mappings, with some specializations; the obtained results generalize the result of Agarwal and Verma [2]."
    ABSTRACT: The theory of maximal set-valued monotone mappings provides a powerful framework for the study of convex programming and variational inequalities. Based on the notion of relatively maximal relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing most investigations on weak convergence using the proximal point algorithm in a real Hilbert space setting. The well-known method of multipliers of constrained convex programming is a special case of the proximal point algorithm. The obtained results can be used to generalize the Yosida approximation, which, in turn, can be applied to extend first-order evolution equations to the case of evolution inclusions. Furthermore, we observe that the Douglas-Rachford splitting method for finding a zero of the sum of two monotone operators is a specialization of the proximal point algorithm as well. This allows a further generalization and unification of a wide range of convex programming algorithms.
    Preview · Article · Aug 2011 · WSEAS Transactions on Mathematics
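The Douglas-Rachford splitting mentioned in the abstract above admits a compact sketch. This is an illustrative example, not taken from the cited article: the two convex functions (an l1 term and a quadratic), the step size t, and the parameter values are assumptions chosen so both proximal maps have closed forms.

```python
def prox_l1(y, t):
    # Proximal map of t*|x| on the real line (soft-thresholding).
    if y > t:
        return y - t
    if y < -t:
        return y + t
    return 0.0

def prox_sq(y, t, b=2.0):
    # Proximal map of t * 0.5*(x - b)^2: (y + t*b) / (1 + t).
    return (y + t * b) / (1.0 + t)

def douglas_rachford(z0=0.0, t=1.0, lam=0.5, iters=200):
    # Douglas-Rachford splitting for min_x lam*|x| + 0.5*(x - b)^2,
    # i.e. finding a zero of the sum of the two subdifferentials.
    z = z0
    for _ in range(iters):
        x = prox_sq(z, t)                    # backward step on g
        y = prox_l1(2.0 * x - z, t * lam)    # reflected step on f
        z = z + y - x                        # governing-sequence update
    return prox_sq(z, t)

print(douglas_rachford())  # the minimizer is b - lam = 1.5
```

The governing sequence z is exactly a proximal point iteration on a certain maximal monotone operator, which is the specialization the abstract refers to.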
  • Source
    • "Furthermore, Rockafellar [2] applied the proximal point algorithm in convex programming. For more details, we refer the reader to [1]–[10]."
    ABSTRACT: A generalization of Rockafellar's theorem (1976) is presented in the context of approximating a solution to a general inclusion problem involving a set-valued A-maximal monotone mapping, using the proximal point algorithm in a Hilbert space setting. Although there exists a vast literature on this theorem, most studies focus on relaxing the proximal point algorithm and applying it to inclusion problems. The general framework for A-maximal monotonicity (also referred to as the A-monotonicity framework in the literature) generalizes the theory of set-valued maximal monotone mappings, including H-maximal monotonicity (also referred to as H-monotonicity).
    Full-text · Article · Apr 2008 · Applied Mathematics Letters
    • "We note that, to check the above criterion, one does not need to invert the matrix M_k, as will be explained in what follows. The presented approximation rule is constructive and has advantages in some situations, when compared to the original [22] (and its variations, e.g., [32] [11] [7]), where essentially one has ε_k = 0 and Σ_{k=0}^∞ δ_k < ∞ (in the setting of M_k = I). We refer the reader to [26] [29] [28] [24] for some applications where the relative-error criterion appears useful."
    ABSTRACT: For the problem of solving maximal monotone inclusions, we present a rather general class of algorithms, which contains hybrid inexact proximal point methods as a special case and allows for the use of a variable metric in subproblems. The global convergence and local linear rate of convergence are established under standard assumptions. We demonstrate the advantage of variable metric implementation in the case of solving systems of smooth monotone equations by the proximal Newton method.
    Full-text · Article · Jan 2008 · SIAM Journal on Optimization
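The proximal Newton idea for smooth monotone equations described in the abstract above can be illustrated with a regularized Newton step. This is a hedged sketch, not the paper's algorithm: the equation F, the regularization weight mu, and the iteration count are assumptions; the point is that the linearized proximal subproblem at each step reduces to a damped Newton update.

```python
def F(x):
    # A smooth, strictly monotone scalar equation with unique zero x* = 1.
    return x**3 + x - 2.0

def dF(x):
    return 3.0 * x**2 + 1.0

def prox_newton(x0=5.0, mu=1e-2, iters=30):
    # Each step solves the linearized proximal subproblem
    #   F(x_k) + (dF(x_k) + mu) * (x - x_k) = 0,
    # i.e. a Newton step regularized by the proximal term mu*(x - x_k).
    x = x0
    for _ in range(iters):
        x = x - F(x) / (dF(x) + mu)
    return x

print(prox_newton())  # converges to the zero x* = 1
```

The proximal term keeps the linear system well-conditioned even where dF is small, which is the practical advantage of embedding Newton steps in a proximal point scheme.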