Figure available from: Computational Optimization and Applications
Source publication
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator T and a single-valued monotone, Lipschitz continuous, and expectation-valued operator V. We draw motivation from the seminal work by Attouch and Cabot (Attouch in AMO 80:547–598, 2019, Attouch in MP 184: 243–287) on relax...
Similar publications
In this paper, we present a stochastic gradient algorithm for minimizing a smooth objective function that is an expectation over noisy cost samples and only the latter are observed for any given parameter. Our algorithm employs a gradient estimation scheme with random perturbations, which are formed using the truncated Cauchy distribution from the...
Citations
... is assumed to be strongly monotone. Hence, we can solve the VI to ε-accuracy with exponential rate using for instance the method in [14]. ...
We consider an N-player hierarchical game in which the ith player’s objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework allows for capturing a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop an iteratively regularized and smoothed variance-reduced modified extragradient framework for iteratively approaching hierarchical equilibria in a stochastic setting. We equip our analysis with rate statements, complexity guarantees, and almost-sure convergence results. We then extend these statements to settings where the lower-level problem is solved inexactly and provide the corresponding rate and complexity statements. Our model framework encompasses many game theoretic equilibrium problems studied in the context of power markets. We present a realistic application to the study of virtual power plants, emphasizing the role of hierarchical decision making and regularization. Preliminary numerics suggest that empirical behavior compares well with theoretical guarantees.
... In addition, [41] designed a proximal decomposition algorithm where a regularized subgame is inexactly solved at each iteration. Furthermore, nonsmoothness in player problems leads to monotone stochastic inclusions, a class of problems that has recently been addressed by [11] via relaxed inertial forward-backward-forward (FBF) splitting methods. They proved that the expectation-valued gap function at the time-averaged sequence diminishes with a sublinear rate O(1/k), and that the oracle complexity for obtaining a suitably defined ε-solution is O(1/ε^{2+a}) with a > 0. Lipschitzian assumptions were weakened by [47] by leveraging smoothing, while relaxation of the monotonicity requirement was considered by [24]. ...
... In the consensus step (10), each player i ∈ N merely uses its neighboring information to update an intermediate estimate v_{i,k}. Then each player i updates its equilibrium strategy by the proximal stochastic gradient method overlaid by a Tikhonov regularization (11), in which an estimate of the aggregate, N v_{i,k}, rather than the true value σ(x_k) is used to evaluate the stochastic gradient. Specifically, each player i ∈ N may independently choose the steplengths {α_{i,k}} and the regularization parameters {η_{i,k}}. ...
... Then (11) can be rewritten as ...
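The Tikhonov-regularized proximal stochastic gradient update described in these excerpts can be sketched in a minimal form. This is an illustrative assumption, not the paper's exact update (11): the ℓ1 prox, the parameter names, and the toy gradient oracle are all hypothetical.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding), a common
    # example of a nonsmooth term with an efficient prox-evaluation.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tikhonov_prox_sg_step(x_i, grad_est, alpha_ik, eta_ik):
    """One Tikhonov-regularized proximal stochastic gradient step.

    x_i      : player i's current strategy
    grad_est : stochastic gradient evaluated at an aggregate estimate
    alpha_ik : player-specific steplength
    eta_ik   : player-specific Tikhonov regularization parameter
    """
    # The regularization term eta_ik * x_i is what drives the iterates
    # toward the least-norm equilibrium as eta_ik -> 0.
    return prox_l1(x_i - alpha_ik * (grad_est + eta_ik * x_i), alpha_ik)

# Toy usage: a noisy gradient sample of a quadratic cost
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
g = x + 0.01 * rng.standard_normal(2)   # stochastic gradient estimate
x_next = tikhonov_prox_sg_step(x, g, alpha_ik=0.1, eta_ik=0.05)
```

In the distributed scheme, `grad_est` would be evaluated at the consensus estimate of the aggregate rather than at the true aggregate, as the snippet above describes.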
We consider a class of N-player nonsmooth aggregative games over networks in stochastic regimes, where the ith player is characterized by a composite cost function comprising a smooth expectation-valued term dependent on its own strategy and an aggregate function of rival strategies, a convex hierarchical term dependent on its strategy, and a nonsmooth convex function of its strategy with an efficient prox-evaluation. We design a fully distributed iterative proximal stochastic gradient method overlaid by a Tikhonov regularization, where each player may independently choose its steplengths and regularization parameters while meeting some coordination requirements. Under a monotonicity assumption on the concatenated player-specific gradient mapping, we prove that the generated sequence converges almost surely to the least-norm Nash equilibrium. In addition, we establish the convergence rate associated with the expected gap function at the time-averaged sequence. Furthermore, we consider the extension to the private hierarchical regime where each player is a leader with respect to a collection of private followers competing in a strongly monotone game, parametrized by leader decisions. By leveraging a convolution-smoothing framework, we present amongst the first fully distributed schemes for computing a Nash equilibrium of a game complicated by such a hierarchical structure. Based on this framework, we extend the rate statements to accommodate the computation of a hierarchical stochastic Nash equilibrium by using a Fitzpatrick gap function. Notably, both sets of fully distributed schemes display near-optimal sample-complexities, suggesting that this hierarchical structure does not lead to performance degradation. Finally, we validate the proposed methods on a networked Nash-Cournot equilibrium problem and a hierarchical generalization.
... Inspired by [24], [25], we design next a fully-distributed relaxed-inertial preconditioned forward-backward-forward (RIpFBF) algorithm to compute a v-GNE of the GNEP (6) by exploiting the splitting T = A + B. Specifically, we rely on the following result: ...
... Proof of Theorem 2: We take inspiration from [25], [27] to show that the claim holds true. In particular, we first derive the fundamental recursion between successive iterates of Algorithm 1, and then show the recursion enjoys a Lyapunov-like decrease ensuring convergence. ...
We consider generalized Nash equilibrium problems (GNEPs) with linear coupling constraints affected by both local (i.e., agent-wise) and global (i.e., shared resources) disturbances taking values in polyhedral uncertainty sets. By making use of traditional tools borrowed from robust optimization, for this class of problems we derive a tractable, finite-dimensional reformulation leading to a deterministic "extended game", and we show that the latter still amounts to a GNEP featuring generalized Nash equilibria "in the worst case". We then design a fully-distributed, accelerated algorithm based on monotone operator theory, which enjoys convergence towards a Nash equilibrium of the original, uncertain game under weak structural assumptions. Finally, we illustrate the effectiveness of the proposed distributed scheme through numerical simulations.
... Second, the mapping φ i (x i , ·) is assumed to be strongly monotone. Hence, we can solve the VI to ε-accuracy with exponential rate using for instance the method in [CSSV22]. ...
The theory of learning in games has so far focused mainly on games with simultaneous moves. Recently, researchers in machine learning have started investigating learning dynamics in games involving hierarchical decision-making. We consider an N-player hierarchical game in which the ith player's objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework allows for capturing a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop an iteratively regularized and smoothed variance-reduced modified extragradient framework for learning hierarchical equilibria in a stochastic setting. We equip our analysis with rate statements, complexity guarantees, and almost-sure convergence claims. We then extend these statements to settings where the lower-level problem is solved inexactly and provide the corresponding rate and complexity statements.
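The iteratively regularized extragradient idea in this abstract can be sketched in a minimal form. Variance reduction is mimicked here by mini-batch averaging and the smoothing of the hierarchical term is omitted, so the function names, constants, and toy map are illustrative assumptions rather than the paper's scheme.

```python
import numpy as np

def reg_extragradient_step(x, F, gamma, eta, batch=8, rng=None):
    """One iteratively regularized (stochastic) extragradient step.

    x     : current iterate
    F     : oracle F(x, rng) returning a noisy sample of the map
    gamma : steplength; eta : Tikhonov regularization parameter
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Mini-batch averaging stands in for the paper's variance reduction.
    Fx = np.mean([F(x, rng) for _ in range(batch)], axis=0)
    y = x - gamma * (Fx + eta * x)              # extrapolation step
    Fy = np.mean([F(y, rng) for _ in range(batch)], axis=0)
    return x - gamma * (Fy + eta * y)           # update step

# Toy strongly monotone map F(x) = A x with additive noise
A = np.array([[2.0, 1.0], [1.0, 2.0]])
F = lambda x, rng: A @ x + 0.01 * rng.standard_normal(2)
x = np.array([1.0, 1.0])
for _ in range(200):
    x = reg_extragradient_step(x, F, gamma=0.1, eta=0.01)
```

For a fixed eta the iterates approach the zero of the regularized map; in the iteratively regularized framework, eta would be driven to zero along the iterations to select an equilibrium.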
... Additionally, in general one needs stochastic versions of the algorithms used, which we do not provide in the case of RIFBF. A stochastic variant of the relaxed inertial forward-backward-forward algorithm RIFBF was proposed and analyzed in (Cui et al., 2021), one and a half years after the first version of this article. The convergence analysis of the stochastic numerical method relies heavily on the one carried out for RIFBF. ...
We introduce a relaxed inertial forward-backward-forward (RIFBF) splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitz continuous operator. This work aims to extend Tseng's forward-backward-forward method by using both inertial effects and relaxation parameters. We first formulate a second order dynamical system that approaches the solution set of the monotone inclusion problem to be solved and provide an asymptotic analysis for its trajectories. We provide for RIFBF, which follows by explicit time discretization, a convergence analysis in the general monotone case as well as when applied to solving pseudo-monotone variational inequalities. We illustrate the proposed method by applications to a bilinear saddle point problem, in the context of which we also emphasize the interplay between the inertial and the relaxation parameters, and to the training of Generative Adversarial Networks (GANs).
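A minimal sketch of the RIFBF iteration described above, assuming the maximally monotone operator is the normal cone of a box (so its resolvent is a projection). The parameter values below are illustrative; in the paper, the stepsize, inertia, and relaxation must jointly satisfy explicit conditions.

```python
import numpy as np

def rifbf(V, proj, x0, gamma=0.5, alpha=0.2, rho=0.9, iters=500):
    """Relaxed inertial forward-backward-forward (RIFBF) sketch.

    Finds a zero of N_C + V, where the resolvent of the normal cone N_C
    is the projection `proj` onto C, and V is monotone and L-Lipschitz.
    Requires gamma * L < 1; alpha (inertia) and rho (relaxation) are
    assumed to satisfy the paper's coordination conditions.
    """
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + alpha * (x - x_prev)            # inertial extrapolation
        y = proj(w - gamma * V(w))              # forward-backward step
        z = y + gamma * (V(w) - V(y))           # forward correction
        x_prev, x = x, (1 - rho) * w + rho * z  # relaxation
    return x

# Bilinear saddle point min_x max_y x*y over [-1,1]^2: V(u) = (y, -x),
# a monotone, 1-Lipschitz map whose unique zero on the box is the origin.
V = lambda u: np.array([u[1], -u[0]])
proj = lambda u: np.clip(u, -1.0, 1.0)
x = rifbf(V, proj, np.array([0.8, -0.6]))
```

The bilinear example mirrors the saddle point illustration mentioned in the abstract: a plain forward-backward step diverges on such skew maps, while the forward correction step makes the scheme convergent.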
We consider monotone inclusion problems with expectation-valued operators, a class of problems that subsumes convex stochastic optimization problems with possibly smooth expectation-valued constraints as well as subclasses of stochastic variational inequality and equilibrium problems. A direct application of splitting schemes is complicated by the need to resolve problems with expectation-valued maps at each step, a concern addressed via sampling. Accordingly, we propose an avenue for addressing uncertainty in the mapping:
Variance-reduced stochastic modified forward-backward splitting scheme (vr-SMFBS). We consider structured settings when the map can be decomposed into an expectation-valued map A and a maximal monotone map B with a tractable resolvent. We show that the proposed schemes are equipped with a.s. convergence guarantees, linear (strongly monotone A) and sublinear (monotone A) rates of convergence, while achieving optimal oracle complexity bounds. The rate statements in monotone regimes appear to be amongst the first and leverage the Fitzpatrick gap function for monotone inclusions. Furthermore, the schemes rely on weaker moment requirements on noise and allow for weakening unbiasedness requirements on oracles in strongly monotone regimes. Preliminary numerics on a class of two-stage stochastic variational inequality problems reflect these findings and show that the variance-reduced schemes outperform stochastic approximation (SA) schemes and sample-average approximation approaches. The benefits of attaining deterministic rates of convergence become even more salient when resolvent computation is expensive.
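The forward-backward structure underlying a variance-reduced scheme of this kind can be sketched as follows, assuming (hypothetically) that the resolvent of B is soft-thresholding and that variance reduction is implemented via a geometrically growing batch size; the paper's modified scheme and exact schedule may differ.

```python
import numpy as np

def vr_smfb(A_oracle, resolvent_B, x0, gamma=0.1, iters=60,
            batch0=4, rng=None):
    """Variance-reduced stochastic forward-backward sketch.

    Iterates x_{k+1} = J_{gamma B}(x_k - gamma * mean of N_k samples
    of A(x_k)), where the batch size N_k grows geometrically so that
    the sampling error decays and deterministic rates are recovered.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.copy()
    for k in range(iters):
        n_k = int(batch0 * 1.1 ** k) + 1        # growing sample size
        Ax = np.mean([A_oracle(x, rng) for _ in range(n_k)], axis=0)
        x = resolvent_B(x - gamma * Ax, gamma)  # backward (prox) step
    return x

# Toy inclusion: A(x) = x - 1 + noise (strongly monotone),
# B = subdifferential of ||.||_1, whose resolvent is soft-thresholding;
# the unique solution of 0 in A(x) + B(x) is x = 0.
A_oracle = lambda x, rng: x - 1.0 + 0.1 * rng.standard_normal(x.shape)
resolvent_B = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)
x = vr_smfb(A_oracle, resolvent_B, np.ones(3))
```

The growing batch size is the key design choice: with a fixed batch the iterates stall at a noise floor, whereas a geometrically increasing sample size lets the scheme track the deterministic rates cited in the abstract at the stated oracle cost.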
We propose and analyze a new dynamical system with a closed-loop control law in a Hilbert space [Formula: see text], aiming to shed light on the acceleration phenomenon for monotone inclusion problems, which unifies a broad class of optimization, saddle point, and variational inequality (VI) problems under a single framework. Given an operator [Formula: see text] that is maximal monotone, we propose a closed-loop control system that is governed by the operator [Formula: see text], where a feedback law [Formula: see text] is tuned by the resolution of the algebraic equation [Formula: see text] for some [Formula: see text]. Our first contribution is to prove the existence and uniqueness of a global solution via the Cauchy–Lipschitz theorem. We present a simple Lyapunov function for establishing the weak convergence of trajectories via the Opial lemma and strong convergence results under additional conditions. We then prove a global ergodic convergence rate of [Formula: see text] in terms of a gap function and a global pointwise convergence rate of [Formula: see text] in terms of a residue function. Local linear convergence is established in terms of a distance function under an error bound condition. Further, we provide an algorithmic framework based on the implicit discretization of our system in a Euclidean setting, generalizing the large-step hybrid proximal extragradient framework. Even though the discrete-time analysis is a simplification and generalization of existing analyses for a bounded domain, it is largely motivated by the aforementioned continuous-time analysis, illustrating the fundamental role that the closed-loop control plays in acceleration in monotone inclusion. A highlight of our analysis is a new result concerning [Formula: see text]-order tensor algorithms for monotone inclusion problems, complementing the recent analysis for saddle point and VI problems.
Funding: This work was supported in part by the Mathematical Data Science Program of the Office of Naval Research [Grant N00014-18-1-2764] and by the Vannevar Bush Faculty Fellowship Program [Grant N00014-21-1-2941].