Article
The perturbed proximal point algorithm and some of its applications
Applied Mathematics and Optimization (Impact Factor: 0.59). 02/1994; 29(2):125–159. DOI: 10.1007/BF01204180
ABSTRACT
Following the works of R. T. Rockafellar, to search for a zero of a maximal monotone operator, and of B. Lemaire, to solve convex optimization problems, we present a perturbed version of the proximal point algorithm. We apply this new algorithm to convex optimization and to variational inclusions or, more particularly, to variational inequalities.
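The basic scheme described in the abstract can be illustrated with a minimal sketch of the perturbed proximal point iteration x_{k+1} = prox_{c_k f}(x_k) + e_k, where the perturbations e_k are summable. The objective f(x) = 0.5·x², the constant step size, and the error sequence below are illustrative assumptions for the demonstration, not taken from the paper.

```python
# Hedged sketch of a perturbed proximal point iteration
# x_{k+1} = prox_{c_k f}(x_k) + e_k  for f(x) = 0.5 * x**2,
# whose proximal map has the closed form prox_{c f}(v) = v / (1 + c).
# Step sizes c_k and perturbations e_k are illustrative choices.

def prox(v, c):
    """Proximal map of f(x) = 0.5*x**2: argmin_x 0.5*x**2 + (1/(2c))*(x - v)**2."""
    return v / (1.0 + c)

def perturbed_ppa(x0, steps=50):
    x = x0
    for k in range(steps):
        c_k = 1.0                 # constant step size (illustrative)
        e_k = 0.5 ** (k + 1)      # summable perturbation: sum_k e_k < infinity
        x = prox(x, c_k) + e_k
    return x

x_star = perturbed_ppa(10.0)
print(abs(x_star) < 1e-3)         # iterates approach the unique minimizer x* = 0
```

Because the perturbations are summable, the iterates still converge to the minimizer of f, which is the point the perturbed-algorithm analysis is after.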

 "The notion of relatively maximal relaxed monotonicity is based on the notion of A-maximal relaxed monotonicity [1] and its variants introduced and studied in [6]–[13], and is more general than the usual maximal monotonicity; in particular, it cannot be reduced to that context, but it seems to be application-oriented. More details on relaxed and hybrid proximal point algorithms can be found in [1]–[5], [8]–[14], [16]–[18], [21], [22]. We present a generalization of a well-cited result of Eckstein and Bertsekas [14, Theorem 3] to the case of relatively maximal (m)-relaxed monotone mappings, with some specializations; the obtained results also generalize the result of Agarwal and Verma [2]. "
ABSTRACT: The theory of maximal set-valued monotone mappings provides a powerful framework for the study of convex programming and variational inequalities. Based on the notion of relatively maximal relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing most investigations on weak convergence using the proximal point algorithm in a real Hilbert space setting. The well-known method of multipliers of constrained convex programming is a special case of the proximal point algorithm. The obtained results can be used to generalize the Yosida approximation, which, in turn, can be applied to generalize first-order evolution equations to the case of evolution inclusions. Furthermore, we observe that the Douglas–Rachford splitting method for finding a zero of the sum of two monotone operators is a specialization of the proximal point algorithm as well. This allows a further generalization and unification of a wide range of convex programming algorithms.
WSEAS Transactions on Mathematics 08/2011; 10(8):259–269.
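The Douglas–Rachford splitting mentioned in the abstract can be sketched concretely. The following minimizes f + g for the illustrative choices f(x) = |x| and g(x) = 0.5·(x − 3)², both of which have closed-form proximal maps; these functions and parameters are assumptions for the example, not from the cited work.

```python
# Hedged sketch of Douglas–Rachford splitting for min_x f(x) + g(x),
# with illustrative choices f(x) = |x| and g(x) = 0.5*(x - 3)**2.
# Iteration: x_k = prox_f(y_k); y_{k+1} = y_k + prox_g(2*x_k - y_k) - x_k.

def prox_f(v, c=1.0):
    """Prox of f(x) = |x|: soft-thresholding at level c."""
    return max(abs(v) - c, 0.0) * (1.0 if v > 0 else -1.0)

def prox_g(v, c=1.0):
    """Prox of g(x) = 0.5*(x - 3)**2, in closed form."""
    return (v + 3.0 * c) / (1.0 + c)

def douglas_rachford(y0=0.0, steps=80):
    y = y0
    for _ in range(steps):
        x = prox_f(y)
        y = y + prox_g(2 * x - y) - x
    return prox_f(y)

# Optimality for |x| + 0.5*(x - 3)**2 at x > 0 reads 1 + (x - 3) = 0, i.e. x* = 2.
print(abs(douglas_rachford() - 2.0) < 1e-9)
```

Each iteration touches f and g only through their individual proximal maps, which is exactly why the method can be viewed as a proximal point iteration on an auxiliary operator.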
 "Furthermore, Rockafellar [2] applied the proximal point algorithm in convex programming. For more details, we refer the reader to [1] [2] [3] [4] [5] [6] [7] [8] [9] [10]. "
ABSTRACT: A generalization of Rockafellar's theorem (1976), in the context of approximating a solution to a general inclusion problem involving a set-valued A-maximal monotone mapping using the proximal point algorithm in a Hilbert space setting, is presented. Although there exists a vast literature on this theorem, most studies focus on just relaxing the proximal point algorithm and applying it to inclusion problems. The general framework of A-maximal monotonicity (also referred to as A-monotonicity in the literature) generalizes the theory of set-valued maximal monotone mappings, including H-maximal monotonicity (also referred to as H-monotonicity).
Applied Mathematics Letters 04/2008; 21(4):355–360. DOI: 10.1016/j.aml.2007.05.004 · 1.34 Impact Factor
 "We note that, to check the above criterion, one does not need to invert the matrix M_k, as will be explained in what follows. The presented approximation rule is constructive and has advantages in some situations, when compared to the original [22] (and its variations, e.g., [32], [11], [7]), where essentially one has ε_k = 0 and Σ_{k=0}^∞ δ_k < ∞ (in the setting of M_k = I). We refer the reader to [26], [29], [28], [24] for some applications where the relative-error criterion appears useful. "
ABSTRACT: For the problem of solving maximal monotone inclusions, we present a rather general class of algorithms, which contains hybrid inexact proximal point methods as a special case and allows for the use of a variable metric in subproblems. Global convergence and a local linear rate of convergence are established under standard assumptions. We demonstrate the advantage of the variable-metric implementation in the case of solving systems of smooth monotone equations by the proximal Newton method.
SIAM Journal on Optimization 01/2008; 19(1):240–260. DOI: 10.1137/070688146 · 1.83 Impact Factor
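The relative-error acceptance rule quoted in the excerpt above (accepting an approximate solution of the proximal subproblem once its residual is small relative to the step taken) can be sketched as follows. The operator T, the step size c, and the tolerance σ are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of an inexact proximal point step with a relative-error
# acceptance rule, in the spirit of hybrid proximal methods: the subproblem
# (I + c T)(x) = z is solved only until its residual is small *relative*
# to the step |x - z|, rather than below a preset absolute tolerance.

def T(x):
    return x**3 + x            # a simple monotone operator on R; T(0) = 0

def dT(x):
    return 3 * x**2 + 1        # derivative of T, used by Newton's method

def inexact_prox(z, c=1.0, sigma=0.5):
    """Approximately solve x + c*T(x) = z by Newton's method, stopping
    as soon as |residual| <= sigma * |x - z| (relative-error criterion)."""
    x = z
    while True:
        r = x + c * T(x) - z
        if abs(r) <= sigma * abs(x - z) or abs(r) < 1e-15:
            return x
        x -= r / (1.0 + c * dT(x))    # Newton step on the residual

def hybrid_ppa(z0, steps=40):
    z = z0
    for _ in range(steps):
        z = inexact_prox(z)
    return z

print(abs(hybrid_ppa(2.0)) < 1e-6)    # outer iterates approach the zero of T
```

The point of the relative rule is that early outer iterations, where the step |x − z| is large, get away with crude subproblem solves, while accuracy tightens automatically as the method converges.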