Article

ADMM-Type Methods for Generalized Nash Equilibrium Problems in Hilbert Spaces

... Two Gauss-Seidel-type ADMM-methods for the problem class (GNEP) are introduced in Chapter 6, which is based on [19]. The first method uses a fixed regularization parameter, whereas the second increases the regularization parameter if necessary. ...
... In the above section we have seen that Algorithm 5.1 can be interpreted as a forward-backward splitting method, which is exploited in more detail in the sequel. Here we present an alternative convergence theorem and its proof, using a technique that was discovered in the later work [19] of the authors and therefore was not presented in the underlying manuscript [20]. Since the choice of β > 0 is arbitrary in this alternative convergence theorem, it is an actual improvement of Theorem 5.6. ...
... The first one uses a fixed penalty parameter, whereas the second one increases the penalty parameter if necessary and therefore it can be expected to converge faster than the first method. The convergence analysis presented here is based on [19]. We start again by recalling the generalized Nash equilibrium problem (GNEP) with N players ν. ...
Thesis
Full-text available
This thesis is concerned with a certain class of algorithms for the solution of constrained optimization problems and generalized Nash equilibrium problems in Hilbert spaces. This class of algorithms is inspired by the alternating direction method of multipliers (ADMM) and eliminates the constraints using an augmented Lagrangian approach. The alternating direction method consists of splitting the augmented Lagrangian subproblem into smaller and more easily manageable parts. Before the algorithms are discussed, a substantial amount of background material, including the theory of Banach and Hilbert spaces, fixed-point iterations as well as convex and monotone set-valued analysis, is presented. Thereafter, certain optimization problems and generalized Nash equilibrium problems are reformulated and analyzed using variational inequalities and set-valued mappings. The analysis of the algorithms developed in the course of this thesis is rooted in these reformulations as variational inequalities and set-valued mappings. The first algorithms discussed and analyzed are one weakly and one strongly convergent ADMM-type algorithm for convex, linearly constrained optimization. By equipping the associated Hilbert space with the correct weighted scalar product, the analysis of these two methods is accomplished using the proximal point method and the Halpern method. The rest of the thesis is concerned with the development and analysis of ADMM-type algorithms for generalized Nash equilibrium problems that jointly share a linear equality constraint. The first class of these algorithms is completely parallelizable and uses a forward-backward idea for the analysis, whereas the second class of algorithms can be interpreted as a direct extension of the classical ADMM-method to generalized Nash equilibrium problems. At the end of this thesis, the numerical behavior of the discussed algorithms is demonstrated on a collection of examples.
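For orientation, the classical two-block ADMM iteration on which the thesis builds can be summarized as follows; this is a standard textbook sketch in generic notation (splitting min f(x) + g(z) subject to Ax + Bz = c, penalty parameter ρ > 0, multiplier λ), not the thesis' own Hilbert-space formulation:

\begin{aligned}
x^{k+1} &\in \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + \lambda^{k}/\rho\bigr\|^{2},\\
z^{k+1} &\in \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + \lambda^{k}/\rho\bigr\|^{2},\\
\lambda^{k+1} &= \lambda^{k} + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - c\bigr).
\end{aligned}

Each sweep minimizes the augmented Lagrangian block by block and then updates the multiplier; the Hilbert-space algorithms described in the thesis follow this pattern with weighted scalar products and, for the GNEP variants, per-player subproblems.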
... Problems of this form are ubiquitous in control and engineering. Important examples include inverse problems [1], generalized Nash equilibrium problems [2], [3], [4], domain decomposition for PDEs [5], [6], and many more. Motivated by these applications, we present an operator splitting method designed to approach a specific solution of (P) in a Hilbertian framework using a new dynamical system featuring multiscale aspects. ...
... Finding a Nash equilibrium of the game can in this setting be cast as the search for a solution to the variational inequality (P) with the data just defined. To deal with the constraints present in the general formulation (P), we study the trajectories defined by the continuous-time dynamical system with components (p, x), defined as ...
Preprint
Full-text available
Solving equilibrium problems under constraints is an important problem in optimization and optimal control. In this context an important practical challenge is the efficient incorporation of constraints. We develop a continuous-time method for solving constrained variational inequalities based on a new penalty regulated dynamical system in a general potentially infinite-dimensional Hilbert space. In order to obtain strong convergence of the issued trajectory of our method, we incorporate an explicit Tikhonov regularization parameter in our method, leading to a class of time-varying monotone inclusion problems featuring multiscale aspects. Besides strong convergence, we illustrate the practical efficiency of our developed method in solving constrained min-max problems.
... We refer interested readers to the survey paper by Facchinei and Kanzow [18] and Fischer et al. [20] for a comprehensive overview. In recent years, there has been a growing interest in stochastic GNE, see [38,30], and GNE with infinite dimensional action spaces, particularly in the context of optimal control where the players' strategy spaces can be Banach spaces [28] or Hilbert spaces [8]. However, as mentioned above, the best response strategy for each player in (generalized) Bayesian games is a function of its own type, so many existing results for (generalized) Nash games can no longer be applied directly to (generalized) Bayesian games. ...
Preprint
A Bayesian game is a strategic decision-making model where each player's type parameter characterizing its own objective is private information: each player knows its own type but not its rivals' types, and a Bayesian Nash equilibrium (BNE) is an outcome of this game where each player makes a strategically optimal decision according to its own type under the Nash conjecture. In this paper, we advance the literature by considering a generalized Bayesian game where each player's action space depends on its own type parameter and the rivals' actions. This reflects the fact that in practical applications, a firm's feasible action is often related to its own type (e.g. marginal cost) and the rivals' actions (e.g. common resource constraints in a competitive market). Under some moderate conditions, we demonstrate existence of continuous generalized Bayesian Nash equilibria (GBNE) and uniqueness of such an equilibrium when each player's action space is only dependent on its type. In the case that each player's action space is also dependent on rivals' actions, we give a simple example to show that uniqueness of GBNE is not guaranteed under standard monotone conditions. To compute an approximate GBNE, we restrict each player's response function to the space of polynomial functions of its type parameter and consequently convert the GBNE problem to a stochastic generalized Nash equilibrium problem (SGNE). To justify the approximation, we discuss convergence of the approximation scheme. Some preliminary numerical test results show that the approximation scheme works well.
... Our algorithm differs from prior work [38][39][40] in that we decompose the objective and constraints over scenarios. For each scenario, we solve an N -player game with relatively few constraints, and then synchronize across scenarios via ADMM. ...
Preprint
Full-text available
Decision making in multi-agent games can be extremely challenging, particularly under uncertainty. In this work, we propose a new sample-based approximation to a class of stochastic, general-sum, pure Nash games, where each player has an expected-value objective and a set of chance constraints. This new approximation scheme inherits the accuracy of objective approximation from the established sample average approximation (SAA) method and enjoys a feasibility guarantee derived from the scenario optimization literature. We characterize the sample complexity of this new game-theoretic approximation scheme, and observe that high accuracy usually requires a large number of samples, which results in a large number of sampled constraints. To accommodate this, we decompose the approximated game into a set of smaller games with few constraints for each sampled scenario, and propose a decentralized, consensus ADMM algorithm to efficiently compute a generalized Nash equilibrium of the approximated game. We prove the convergence of our algorithm and empirically demonstrate superior performance relative to a recent baseline.
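As a rough illustration of the "synchronize across scenarios via ADMM" idea mentioned above, the following minimal Python sketch runs consensus ADMM in which each scenario keeps a local copy of the strategy profile and the copies are driven to a common value. The quadratic local objectives, the penalty rho and all data are illustrative assumptions standing in for the per-scenario equilibrium subproblems of the paper:

import numpy as np

# Minimal consensus-ADMM sketch: each "scenario" s holds a local copy x[s] of the
# strategy profile; the copies are forced to agree with a global average z.
# The local objective 0.5*||x - b_s||^2 stands in for the richer per-scenario
# subproblem solved in the paper; rho and the data are illustrative assumptions.
rng = np.random.default_rng(0)
S, n, rho = 8, 4, 1.0                    # number of scenarios, dimension, penalty
b = rng.normal(size=(S, n))              # per-scenario data (assumed)
x = np.zeros((S, n))                     # local copies
u = np.zeros((S, n))                     # scaled dual variables
z = np.zeros(n)                          # global (consensus) profile
for k in range(200):
    # local step: closed form of argmin_x 0.5*||x - b_s||^2 + (rho/2)*||x - z + u_s||^2
    x = (b + rho * (z - u)) / (1.0 + rho)
    # consensus step: average the local copies (plus scaled duals)
    z = (x + u).mean(axis=0)
    # dual step: penalize disagreement with the consensus
    u += x - z
print("consensus profile:", z)           # for this toy problem, close to b.mean(axis=0)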
... Despite the advances in stochastic optimization and variational inequalities, the algorithmic treatment of general monotone inclusion problems under stochastic uncertainty is a largely unexplored field. This is rather surprising given the vast amount of applications of maximally monotone inclusions in control and engineering, encompassing distributed computation of generalized Nash equilibria [18][19][20], traffic systems [21][22][23], and PDE-constrained optimization [24]. The first major aim of this manuscript is to introduce and investigate a relaxed inertial stochastic forward-backward-forward (RISFBF) method, building on an operator splitting scheme originally due to Paul Tseng [25]. ...
Article
Full-text available
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator T and a single-valued monotone, Lipschitz continuous, and expectation-valued operator V. We draw motivation from the seminal work by Attouch and Cabot (Attouch in AMO 80:547–598, 2019, Attouch in MP 184: 243–287) on relaxed inertial methods for monotone inclusions and present a stochastic extension of the relaxed inertial forward–backward-forward method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that our method produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence, and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate while the oracle complexity of computing an ϵ-solution improves to O(1/ϵ). Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement allowing for far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of O(1/k) while the oracle complexity of computing a suitably defined ϵ-solution is O(1/ϵ^{1+a}) where a > 1. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to competitors.
... Most approximation results for Nash equilibria under monotonicity constraints, such as the ones verified in [5,7,12,13,24,30,36,44], involve discrete time flows. The first study that used a continuous time flow to approximate Nash equilibria in monotone games set in finite dimensions was initiated by Flåm [18]. ...
Preprint
We consider the basic problem of approximating Nash equilibria in noncooperative games. For monotone games, we design continuous time flows which converge in an averaged sense to Nash equilibria. We also study mean field equilibria, which arise in the large player limit of symmetric noncooperative games. In this setting, we will additionally show that the approximation of mean field equilibria is possible under a suitable monotonicity hypothesis.
... Despite the advances in stochastic optimization and variational inequalities, the algorithmic treatment of general monotone inclusion problems under stochastic uncertainty is a largely unexplored field. This is rather surprising given the vast amount of applications of maximally monotone inclusions in control and engineering, encompassing distributed computation of generalized Nash equilibria [15,28,81], traffic systems [31,32,40], and PDE-constrained optimization [8]. The first major aim of this manuscript is to introduce and investigate a relaxed inertial stochastic forward-backward-forward (RISFBF) method, building on an operator splitting scheme originally due to Paul Tseng [80]. ...
Preprint
Full-text available
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator T and a single-valued monotone, Lipschitz continuous, and expectation-valued operator V. We draw motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions and present a stochastic extension of the relaxed inertial forward-backward-forward (RISFBF) method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that (RISFBF) produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence, and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate while the oracle complexity of computing an ϵ-solution improves to O(1/ϵ). Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement allowing for far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of O(1/k) while the oracle complexity of computing a suitably defined ϵ-solution is O(1/ϵ^{1+a}) where a > 1. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to stochastic forward-backward-forward (SFBF) and SA schemes.
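In generic notation, the kind of iteration described in this abstract (inertial extrapolation, a Tseng-type forward-backward-forward step with a mini-batch estimator of V, and relaxation) can be sketched as follows; the symbols α_k, ρ_k, λ and the estimator \widehat V_k are placeholders for illustration, not the paper's exact scheme or parameter conditions:

\begin{aligned}
w^{k}   &= x^{k} + \alpha_{k}\,\bigl(x^{k} - x^{k-1}\bigr), && \text{(inertial extrapolation)}\\
y^{k}   &= J_{\lambda T}\!\bigl(w^{k} - \lambda\,\widehat V_{k}(w^{k})\bigr), && \text{(backward/resolvent step)}\\
z^{k}   &= y^{k} + \lambda\,\bigl(\widehat V_{k}(w^{k}) - \widehat V_{k}(y^{k})\bigr), && \text{(forward correction)}\\
x^{k+1} &= (1-\rho_{k})\,w^{k} + \rho_{k}\,z^{k}, && \text{(relaxation)}
\end{aligned}

where \widehat V_k denotes a mini-batch average of samples of V, which is the variance-reduction device referred to above.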
Preprint
Full-text available
This work presents a new method for online selection of multiple penalty parameters for the alternating direction method of multipliers (ADMM) algorithm applied to optimization problems with multiple constraints or functionals with block matrix components. ADMM is widely used for solving constrained optimization problems in a variety of fields, including signal and image processing. Implementations of ADMM often utilize a single hyperparameter, referred to as the penalty parameter, which needs to be tuned to control the rate of convergence. However, in problems with multiple constraints, ADMM may demonstrate slow convergence regardless of penalty parameter selection due to scale differences between constraints. Accounting for scale differences between constraints to improve convergence in these cases requires introducing a penalty parameter for each constraint. The proposed method is able to adaptively account for differences in scale between constraints, providing robustness with respect to problem transformations and initial selection of penalty parameters. It is also simple to understand and implement. Our numerical experiments demonstrate that the proposed method performs favorably compared to a variety of existing penalty parameter selection methods.
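The selection rule proposed in this preprint is not reproduced here, but the following hedged Python sketch of the familiar residual-balancing heuristic, applied per constraint block, illustrates why separate penalty parameters help when constraints live on different scales; the constants mu = 10 and tau = 2 are the usual rule-of-thumb values and are assumptions, not the paper's method:

import numpy as np

def update_penalties(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    # Residual-balancing heuristic applied per constraint block: grow rho[i] when the
    # primal residual of block i dominates, shrink it when the dual residual dominates.
    # This is the widely used rule of thumb, not the adaptive scheme of the paper above.
    rho = np.asarray(rho, dtype=float).copy()
    for i in range(len(rho)):
        if primal_res[i] > mu * dual_res[i]:
            rho[i] *= tau
        elif dual_res[i] > mu * primal_res[i]:
            rho[i] /= tau
    return rho

# Example: block 0 is badly scaled, block 1 is already balanced.
print(update_penalties([1.0, 1.0], primal_res=[50.0, 1.0], dual_res=[0.5, 1.2]))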
Article
We propose a geometric framework to describe and analyse a wide array of operator splitting methods for solving monotone inclusion problems. The initial inclusion problem, which typically involves several operators combined through monotonicity-preserving operations, is seldom solvable in its original form. We embed it in an auxiliary space, where it is associated with a surrogate monotone inclusion problem with a more tractable structure and which allows for easy recovery of solutions to the initial problem. The surrogate problem is solved by successive projections onto half-spaces containing its solution set. The outer approximation half-spaces are constructed by using the individual operators present in the model separately. This geometric framework is shown to encompass traditional methods as well as state-of-the-art asynchronous block-iterative algorithms, and its flexible structure provides a pattern to design new ones.
Article
Solving equilibrium problems under constraints is an important problem in optimization and optimal control. In this context an important practical challenge is the efficient incorporation of constraints. We develop a continuous-time method for solving constrained variational inequalities based on a new penalty regulated dynamical system in a general potentially infinite-dimensional Hilbert space. In order to obtain strong convergence of the issued trajectory of our method, we incorporate an explicit Tikhonov regularization parameter in our method, leading to a class of time-varying monotone inclusion problems featuring multiscale aspects. Besides strong convergence, we illustrate the practical efficiency of our developed method in solving constrained min-max problems.
Article
In this paper, we present an inertial iterative method for solving pseudomonotone equilibrium and fixed point problems in Banach spaces. Under appropriate conditions, we improve the convergence efficiency of our proposed algorithm by introducing a new step size and iteration rule, and further derive a strong convergence theorem. Finally, we demonstrate through numerical experiments that our new algorithm compares favourably with existing methods in terms of convergence behaviour.
Article
Full-text available
In this paper, we extend the well-known alternating direction method of multipliers (ADMM) for optimization problems to generalized Nash equilibrium problems (GNEPs) with shared constraints. We develop an ADMM-type algorithm with fixed regularization to tackle GNEPs in which an upper estimate for the operator norm is not known, and we apply a multiplier-penalty approach in order to eliminate the joint constraints. Equipping the Hilbert space with an appropriate weighted scalar product, the method turns out to be weakly convergent under a Lipschitz continuity and monotonicity assumption. A proximal term is then added to improve the convergence properties. Furthermore, a comparative analysis of the quasi-variational inequality method, the interior point method, the penalty method and the proposed method is discussed.
Article
Full-text available
The further increase of microgrid generation capacity in the distribution network could trigger the emergence of MW-level microgrid aggregators (MGAs) that participate in the regional energy sharing market (ESM) and sell excess energy. However, the uncertainty of renewable energy generation can cause energy sharing to fail and lead to economic losses for the energy sharing market. Therefore, a novel two-sided market with clearing price and risk management is proposed. In order to optimize the energy sharing strategy of MGAs, a risk-averse two-stage stochastic game model is established. The objective of each MGA is to maximize its revenue in the ESM, while the ESM operator aims to keep the system stable and balance supply and demand. The sample average approximation (SAA) method is employed to approximate the stochastic Cournot-Nash equilibrium. A distributed market clearing algorithm based on the alternating direction method of multipliers and best response seeking is developed to find the normalized Nash equilibrium. The existence of the SAA Nash equilibrium and the convergence of the best response seeking algorithm are investigated. Numerical simulations prove that the proposed game model can effectively increase the profit of MGAs, control the overbidding risk and decrease the energy sharing cost.
Article
Full-text available
Generalized Nash equilibrium problems are single-shot Nash equilibrium problems, whereby the decisions of all agents are coupled through a shared constraint. Such games are generally challenging to solve as they might give rise to a very large number of solutions. In this context, spanning many equilibria can be valuable for providing meaningful interpretations. In the literature, to compute equilibria, equilibrium problems are classically reformulated as optimization problems, potential games, relaxed and extended games. Applications of these reformulations to an economic dispatch problem under perfect and imperfect competition are provided. Unfortunately, these approaches only describe a very limited part of the equilibrium set. To fill that gap, relying on the normalized Nash equilibrium as solution concept, we provide a parametrized decomposition algorithm inspired by the Inexact-ADMM to span many more equilibrium points. Complexifying the setting, we consider an information structure in which the agents can withhold some local information from sensitive data, resulting in private coupling constraints. The convergence of the algorithm and deviations in the players' strategies at equilibrium are formally analyzed. In addition, the algorithm can be used to coordinate the agents on one specific equilibrium with desirable properties at the system level. The coordination game is formulated as a principal-agent problem, and a procedure is detailed to compute the equilibrium that minimizes a secondary cost function capturing system-level properties. Finally, the Inexact-ADMM is applied to a cellular resource allocation problem, exhibiting a better convergence rate than vanilla ADMM, and to compute equilibria that achieve both system-level efficiency and maximum fairness.
Article
Full-text available
We propose a proximal algorithm for minimizing objective functions consisting of three summands: the composition of a nonsmooth function with a linear operator, another nonsmooth function (with each of the nonsmooth summands depending on an independent block variable), and a smooth function which couples the two block variables. The algorithm is a full splitting method, which means that the nonsmooth functions are processed via their proximal operators, the smooth function via gradient steps, and the linear operator via matrix times vector multiplication. We provide sufficient conditions for the boundedness of the generated sequence and prove that any cluster point of the latter is a KKT point of the minimization problem. In the setting of the Kurdyka-Lojasiewicz property, we show global convergence and derive convergence rates for the iterates in terms of the Lojasiewicz exponent.
Article
Full-text available
We consider a regularized version of a Jacobi-type alternating direction method of multipliers (ADMM) for the solution of a class of separable convex optimization problems in a Hilbert space. The analysis shows that this method is equivalent to the standard proximal-point method applied in a Hilbert space with a transformed scalar product. The method therefore inherits the known convergence results from the proximal-point method and allows suitable modifications to get a strongly convergent variant. Some additional properties are also shown by exploiting the particular structure of the ADMM-type solution method. Applications and numerical results are provided for the domain decomposition method and potential (generalized) Nash equilibrium problems in a Hilbert space setting.
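The equivalence claimed in this abstract can be condensed to one line in generic notation: writing the first-order (KKT/VI) system of the separable problem as a monotone inclusion 0 ∈ F(u) and collecting the regularization and penalty terms in a self-adjoint, positive definite operator M, one ADMM-type sweep produces the iterate

0 \in F\bigl(u^{k+1}\bigr) + M\bigl(u^{k+1} - u^{k}\bigr)
\quad\Longleftrightarrow\quad
u^{k+1} = \bigl(M + F\bigr)^{-1}\bigl(M u^{k}\bigr),

i.e. a proximal-point (resolvent) step with respect to the transformed scalar product ⟨u, v⟩_M := ⟨Mu, v⟩; the symbols F and M are generic placeholders sketching the structure, not the paper's concrete operators.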
Article
Full-text available
In this paper, we propose a distributed algorithm for computation of a generalized Nash equilibrium (GNE) in noncooperative games over networks. We consider games in which the feasible decision sets of all players are coupled together by a globally shared affine constraint. Adopting the variational GNE as a refined solution, we reformulate the problem as that of finding the zeros of a sum of monotone operators through a primal–dual analysis and an augmentation of variables. Then we introduce a distributed algorithm based on forward–backward operator splitting methods. Each player only needs to know its local objective function, local feasible set, and a local block of the affine constraint, and share information with its neighbours. However, each player also needs to observe the decisions that its objective function directly depends on to evaluate its local gradient. We show convergence of the proposed algorithm for fixed step-sizes under some mild assumptions. Moreover, a distributed algorithm with inertia is also introduced and analysed for distributed variational GNE seeking. Numerical simulations are given for networked Cournot competition with bounded market capacities, to illustrate the algorithm efficiency and performance.
Article
Full-text available
We propose two numerical algorithms for minimizing the sum of a smooth function and the composition of a nonsmooth function with a linear operator in the fully nonconvex setting. The iterative schemes are formulated in the spirit of the proximal and, respectively, proximal linearized alternating direction method of multipliers. The proximal terms are introduced through variable metrics, which facilitates the derivation of proximal splitting algorithms for nonconvex complexly structured optimization problems as particular instances of the general schemes. Convergence of the iterates to a KKT point of the objective function is proved under mild conditions on the sequence of variable metrics and by assuming that a regularization of the associated augmented Lagrangian has the Kurdyka-Lojasiewicz property. If the augmented Lagrangian has the Lojasiewicz property, then convergence rates of both augmented Lagrangian and iterates are derived.
Article
Full-text available
This paper introduces a parallel and distributed algorithm for solving the following minimization problem with linear constraints: minimize f_1(x_1) + ⋯ + f_N(x_N) subject to A_1 x_1 + ⋯ + A_N x_N = c, x_1 ∈ X_1, …, x_N ∈ X_N, where N ≥ 2, the f_i are convex functions, the A_i are matrices, and the X_i are feasible sets for the variables x_i. Our algorithm extends the alternating direction method of multipliers (ADMM) and decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This paper shows that the classic ADMM can be extended to the N-block Jacobi fashion and preserve convergence in the following two cases: (i) the matrices A_i are mutually near-orthogonal and have full column rank, or (ii) proximal terms are added to the N subproblems (but without any assumption on the matrices A_i). In the latter case, certain proximal terms can let the subproblem be solved in more flexible and efficient ways. We show that ‖x^{k+1} − x^k‖_M^2 converges at a rate of o(1/k), where M is a symmetric positive semidefinite matrix. Since the parameters used in the convergence analysis are conservative, we introduce a strategy for automatically tuning the parameters to substantially accelerate our algorithm in practice. We implemented our algorithm (for case (ii) above) on Amazon EC2 and tested it on basis pursuit problems with >300 GB of distributed data. This is the first time that successfully solving a compressive sensing problem of such a large scale is reported.
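A minimal numerical sketch of the N-block Jacobi-ADMM idea with proximal terms (case (ii) above) is given below for quadratic blocks f_i(x_i) = 0.5‖x_i − b_i‖², so every subproblem has a closed-form solution; the data, the penalty rho and the conservative proximal weights tau_i are illustrative assumptions and do not reproduce the paper's tuned parameters or acceleration strategy:

import numpy as np

# Proximal Jacobi-ADMM sketch for  min  sum_i 0.5*||x_i - b_i||^2
#                                  s.t. sum_i A_i x_i = c,
# with all block updates performed in parallel (Jacobi fashion) and a proximal
# term (tau_i/2)*||x_i - x_i^k||^2 added to each subproblem.
rng = np.random.default_rng(1)
N, m, n = 3, 5, 4
A = [rng.normal(size=(m, n)) for _ in range(N)]
b = [rng.normal(size=n) for _ in range(N)]
c = rng.normal(size=m)
rho = 1.0
tau = [rho * N * np.linalg.norm(A[i], 2) ** 2 for i in range(N)]  # deliberately conservative weights
x = [np.zeros(n) for _ in range(N)]
lam = np.zeros(m)
for k in range(3000):
    Ax = sum(A[i] @ x[i] for i in range(N))
    x_new = []
    for i in range(N):
        # residual seen by block i with all other blocks frozen at iterate k
        r = Ax - A[i] @ x[i] - c + lam / rho
        # closed form of argmin 0.5*||y-b_i||^2 + (rho/2)*||A_i y + r||^2 + (tau_i/2)*||y - x_i||^2
        H = (1.0 + tau[i]) * np.eye(n) + rho * A[i].T @ A[i]
        g = b[i] - rho * A[i].T @ r + tau[i] * x[i]
        x_new.append(np.linalg.solve(H, g))
    x = x_new
    lam = lam + rho * (sum(A[i] @ x[i] for i in range(N)) - c)
print("constraint violation:", np.linalg.norm(sum(A[i] @ x[i] for i in range(N)) - c))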
Article
Full-text available
Recently, the alternating direction method of multipliers (ADMM) has found many efficient applications in various areas; and it has been shown that the convergence is not guaranteed when it is directly extended to the multiple-block case of separable convex minimization problems where there are m ≥ 3 functions without coupled variables in the objective. This fact has given great impetus to investigate various conditions on both the model and the algorithm's parameters that can ensure the convergence of the direct extension of ADMM (abbreviated as "e-ADMM"). Despite some results under very strong conditions (e.g., at least (m−1) functions should be strongly convex) that are applicable to the generic case with a general m, some others concentrate on the special case of m = 3 under the relatively milder condition that only one function is assumed to be strongly convex. We focus on extending the convergence analysis from the case of m = 3 to the more general case of m ≥ 3. That is, we show the convergence of e-ADMM for the case of m ≥ 3 with the assumption of only (m−2) functions being strongly convex; and establish its convergence rates in different scenarios such as the worst-case convergence rates measured by iteration complexity and the asymptotically linear convergence rate under stronger assumptions. Thus the convergence of e-ADMM for the general case of m ≥ 4 is proved; this result seems to be still unknown even though it is intuitive given the known result of the case of m = 3. Even for the special case of m = 3, our convergence results turn out to be more general than the existing results that are derived specifically for the case of m = 3.
Article
Full-text available
Linearly constrained convex optimization has many applications. The first-order optimal condition of the linearly constrained convex optimization is a monotone variational inequality (VI). For solving VI, the proximal point algorithm (PPA) in Euclidean-norm is classical but abstract. Hence, the classical PPA only plays an important theoretical role and it is rarely used in the practical scientific computation. In this paper, we give a review on the recently developed customized PPA in H-norm (H is a positive definite matrix). In the frame of customized PPA, it is easy to construct the contraction-type methods for convex optimization with different linear constraints. In each iteration of the proposed methods, we need only to solve the proximal subproblems which have the closed form solutions or can be efficiently solved up to a high precision. Some novel applications and numerical experiments are reported. Additionally, the original primal-dual hybrid gradient method is modified to a convergent algorithm by using a prediction-correction uniform framework. Using the variational inequality approach, the contractive convergence and convergence rate proofs of the framework are more general and quite simple.
Article
Full-text available
Generalized Nash equilibrium problems have become very important as a modeling tool during the last decades. The aim of this survey paper is twofold. It summarizes recent advances in the research on computational methods for generalized Nash equilibrium problems and points out current challenges. The focus of this survey is on algorithms and their convergence properties. Therefore, we also present reformulations of the generalized Nash equilibrium problem, results on error bounds and properties of the solution set of the equilibrium problems.
Article
Full-text available
The augmented Lagrangian method (ALM) is a benchmark for solving a convex minimization model with linear constraints. We consider the special case where the objective is the sum of m functions without coupled variables. For solving this separable convex minimization model, it is usually required to decompose the ALM subproblem at each iteration into m smaller subproblems, each of which only involves one function in the original objective. Easier subproblems capable of taking full advantage of the functions' properties individually could thus be generated. In this paper, we focus on the case where full Jacobian decomposition is applied to ALM subproblems, i.e., all the decomposed ALM subproblems are eligible for parallel computation at each iteration. For the first time, we show, by an example, that the ALM with full Jacobian decomposition could be divergent. To guarantee the convergence, we suggest combining a relaxation step and the output of the ALM with full Jacobian decomposition. A novel analysis is presented to illustrate how to choose refined step sizes for this relaxation step. Accordingly, a new splitting version of the ALM with full Jacobian decomposition is proposed. We derive the worst-case O(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and nonergodic senses for the new algorithm. Finally, some numerical results are reported to show the efficiency of the new algorithm.
Article
Full-text available
The augmented Lagrangian method (ALM) is a benchmark for solving convex minimization problems with linear constraints. When the objective function of the model under consideration is representable as the sum of some functions without coupled variables, a Jacobian or Gauss–Seidel decomposition is often implemented to decompose the ALM subproblems so that the functions’ properties could be used more effectively in algorithmic design. The Gauss–Seidel decomposition of ALM has resulted in the very popular alternating direction method of multipliers (ADMM) for two-block separable convex minimization models and recently it was shown in He et al. (Optimization Online, 2013) that the Jacobian decomposition of ALM is not necessarily convergent. In this paper, we show that if each subproblem of the Jacobian decomposition of ALM is regularized by a proximal term and the proximal coefficient is sufficiently large, the resulting scheme to be called the proximal Jacobian decomposition of ALM, is convergent. We also show that an interesting application of the ADMM in Wang et al. (Pac J Optim, to appear), which reformulates a multiple-block separable convex minimization model as a two-block counterpart first and then applies the original ADMM directly, is closely related to the proximal Jacobian decomposition of ALM. Our analysis is conducted in the variational inequality context and is rooted in a good understanding of the proximal point algorithm.
Article
Full-text available
The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved when two blocks of variables are alternately updated. It is strongly desirable and practically valuable to extend the ADMM directly to the case of a multi-block convex minimization problem where its objective function is the sum of more than two separable convex functions. However, the convergence of this extension has been missing for a long time—neither an affirmative convergence proof nor an example showing its divergence is known in the literature. In this paper we give a negative answer to this long-standing open question: The direct extension of ADMM is not necessarily convergent. We present a sufficient condition to ensure the convergence of the direct extension of ADMM, and give an example to show its divergence.
Article
Full-text available
The formulation and the semismooth Newton solution of Nash equilibrium multiobjective elliptic optimal control problems are presented. Existence and uniqueness of a Nash equilibrium are proved. The corresponding solution is characterized by an optimality system that is approximated by second-order finite differences and solved with a semismooth Newton scheme. It is demonstrated that the numerical solution is second-order accurate and that the semismooth Newton iteration is globally and locally quadratically convergent. Results of numerical experiments confirm the theoretical estimates and show the effectiveness of the proposed computational framework.
Article
Full-text available
We consider the linearly constrained separable convex programming whose objective function is separable into m individual convex functions with non-overlapping variables. The alternating direction method (ADM) has been well studied in the literature for the special case m = 2. But the convergence of extending ADM to the general case m ≥ 3 is still open. In this paper, we show that the straightforward extension of ADM is valid for the general case m ≥ 3 if a Gaussian back substitution procedure is combined. The resulting ADM with Gaussian back substitution is a novel approach towards the extension of ADM from m = 2 to m ≥ 3, and its algorithmic framework is new in the literature. For the ADM with Gaussian back substitution, we prove its convergence via the analytic framework of contractive type methods and we show its numerical efficiency by some application problems.
Article
Full-text available
Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for ℓ1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
Article
This paper deals with generalized Nash equilibrium problems (GNEPs) in Banach spaces. We give an existence result for normalized equilibria of jointly convex GNEPs and then propose an augmented Lagrangian-type algorithm for their computation. A thorough convergence analysis is conducted which considers the existence of subproblem solutions as well as the feasibility and optimality of limit points. We then apply our investigations to differential economic games and multiobjective optimal control problems governed by linear partial differential equations. Numerical results are provided to demonstrate the practical performance of the method.
Article
We consider the generalized Nash equilibrium problem in a Hilbert space setting. The joint constraints are eliminated by an augmented Lagrangian-type approach, and we present a fully distributed version by using ideas from alternating direction methods of multipliers (ADMM methods). Convergence follows, under a cocoercivity condition, from the fact that this method can be interpreted as a suitable splitting approach in our Hilbert space endowed with a modified scalar product. This observation also leads to a second algorithmic approach, which yields convergence under a Lipschitz assumption and monotonicity. Numerical results are presented for some examples arising in both finite- and infinite-dimensional Hilbert spaces.
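To fix ideas, one sweep of an ADMM-type method for a GNEP whose players ν = 1, …, N share a linear constraint might take the following generic shape, with θ_ν denoting player ν's objective, ρ a penalty parameter, τ a regularization weight and λ the shared multiplier; this is a schematic consistent with the description above, not the paper's precise distributed update or its assumptions:

\begin{aligned}
x_{\nu}^{k+1} &\in \arg\min_{x_{\nu}}\;
  \theta_{\nu}\bigl(x_{\nu}, x_{-\nu}^{k}\bigr)
  + \frac{\rho}{2}\Bigl\|A_{\nu} x_{\nu} + \sum_{\mu \neq \nu} A_{\mu} x_{\mu}^{k} - b + \lambda^{k}/\rho\Bigr\|^{2}
  + \frac{\tau}{2}\bigl\|x_{\nu} - x_{\nu}^{k}\bigr\|^{2},
  \qquad \nu = 1,\dots,N,\\
\lambda^{k+1} &= \lambda^{k} + \rho\Bigl(\sum_{\nu=1}^{N} A_{\nu} x_{\nu}^{k+1} - b\Bigr),
\end{aligned}

so that all player subproblems decouple and can be processed in a fully distributed fashion once the multiplier is broadcast.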
Article
Tseng’s algorithm finds a zero of the sum of a maximally monotone operator and a monotone continuous operator by evaluating the latter twice per iteration. In this paper, we modify Tseng’s algorithm for finding a zero of the sum of three operators, where we add a cocoercive operator to the inclusion. Since the sum of a cocoercive and a monotone-Lipschitz operator is monotone and Lipschitz, we could use Tseng’s method for solving this problem, but this would require evaluating both operators twice per iteration without taking advantage of the cocoercivity of one of them. Instead, in our approach, although the continuous monotone operator must still be evaluated twice, we exploit the cocoercivity of one operator by evaluating it only once per iteration. Moreover, when the cocoercive or continuous-monotone operators are zero it reduces to Tseng’s algorithm or forward-backward splittings, respectively, thereby unifying both algorithms. In addition, we provide a preconditioned version of the proposed method including non-self-adjoint linear operators in the computation of resolvents and the single-valued operators involved. This approach allows us to also extend previous variable metric versions of the Tseng and forward-backward methods and simplify their conditions on the underlying metrics. We also exploit the case in which non-self-adjoint linear operators are triangular by blocks in the primal-dual product space for solving primal-dual composite monotone inclusions, obtaining Gauss–Seidel-type algorithms which generalize several primal-dual methods available in the literature. Finally we explore applications to the obstacle problem, empirical risk minimization, distributed optimization, and nonlinear programming and we illustrate the performance of the method via some numerical simulations.
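In generic notation, the modified iteration described above, with A maximally monotone, B monotone and Lipschitz continuous (evaluated twice) and C cocoercive (evaluated once), can be sketched as

\begin{aligned}
y^{k}   &= J_{\lambda A}\!\bigl(x^{k} - \lambda\,(B x^{k} + C x^{k})\bigr),\\
x^{k+1} &= y^{k} + \lambda\,\bigl(B x^{k} - B y^{k}\bigr);
\end{aligned}

setting C = 0 recovers Tseng's forward-backward-forward iteration and setting B = 0 recovers the forward-backward step. This is only a schematic reading of the abstract in assumed notation, without the step-size conditions or the preconditioned variants.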
Article
This work considers a stochastic Nash game in which each player solves a parameterized stochastic optimization problem. In deterministic regimes, best-response schemes have been shown to be convergent under a suitable spectral property associated with the proximal best-response map. However, a direct application of this scheme to stochastic settings requires obtaining exact solutions to stochastic optimization problems at each iteration. Instead, we propose an inexact generalization in which an inexact solution is computed via an increasing number of projected stochastic gradient steps. Based on this framework, we present three inexact best-response schemes: (i) First, we propose a synchronous scheme where all players simultaneously update their strategies; (ii) Subsequently, we extend this to a randomized setting where a subset of players is randomly chosen to update their strategies while the others keep their strategies invariant; (iii) Finally, we propose an asynchronous scheme, where each player determines its own update frequency and may use outdated rival-specific data in updating its strategy. Under a suitable contractive property of the proximal best-response map, we derive a.s. convergence of the iterates for (i) and (ii) and mean-convergence for (i)–(iii). In addition, we show that for (i)–(iii), the iterates converge to the unique equilibrium in mean at a prescribed linear rate. Finally, we establish the overall iteration complexity in terms of projected stochastic gradient steps for computing an ϵ-Nash equilibrium, and in all settings the iteration complexity is O(1/ϵ^{2(1+c)+δ}), where c = 0 in the context of (i) and represents the positive cost of randomization (in (ii)) and asynchronicity and delay (in (iii)). The schemes are further extended to linear and quadratic recourse-based stochastic Nash games.
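A minimal Python sketch of the synchronous scheme (i) is given below: at outer iteration k every player approximately solves its best-response problem by running an increasing number of projected stochastic gradient steps. The two-player quadratic toy game, the noise model, the step sizes and the inner-iteration schedule k + 1 are all illustrative assumptions, not the paper's setting:

import numpy as np

# Synchronous inexact best-response sketch for a toy 2-player game in which
# player i minimizes E[0.5*(x_i - a_i*x_j - xi)^2] over x_i in [-1, 1],
# with xi a zero-mean random shock.  All data and schedules are assumptions.
rng = np.random.default_rng(2)
a = np.array([0.3, 0.4])                    # coupling coefficients (assumed)
x = np.zeros(2)                             # current strategy profile

def inexact_best_response(i, x, num_steps):
    # Approximate best response of player i via projected stochastic gradient.
    y, j = x[i], 1 - i
    for t in range(1, num_steps + 1):
        xi = rng.normal(scale=0.1)                      # random shock sample
        grad = y - a[i] * x[j] - xi                     # stochastic gradient sample
        y = np.clip(y - grad / t, -1.0, 1.0)            # projected step, diminishing stepsize
    return y

for k in range(50):
    num_steps = k + 1                                   # inner effort grows with the outer counter
    x = np.array([inexact_best_response(i, x, num_steps) for i in range(2)])  # simultaneous update
print("approximate Nash equilibrium:", x)               # here the unique equilibrium is x = (0, 0)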
Article
We propose an augmented Lagrangian-type algorithm for the solution of generalized Nash equilibrium problems (GNEPs). Specifically, we discuss the convergence properties with regard to both feasibility and optimality of limit points. This is done by introducing a secondary GNEP as a new optimality concept. In this context, special consideration is given to the role of suitable constraint qualifications that take into account the particular structure of GNEPs. Furthermore, we consider the behavior of the method for jointly convex GNEPs and describe a modification which is tailored towards the computation of variational equilibria. Numerical results are included to illustrate the practical performance of the overall method.
Article
Building upon the results in [M. Hintermüller and T. Surowiec, Pac. J. Optim., 9 (2013), pp. 251-273], a class of noncooperative Nash equilibrium problems is presented, in which the feasible set of each player is perturbed by the decisions of their competitors via a convex constraint. In addition, for every vector of decisions, a common "state" variable is given by the solution of an affine linear equation. The resulting problem is therefore a generalized Nash equilibrium problem (GNEP). The existence of an equilibrium for this problem is demonstrated, and first-order optimality conditions are derived under a constraint qualification. An approximation scheme is proposed, which involves the solution of a parameter-dependent sequence of standard Nash equilibrium problems. An associated path-following strategy based on the Nikaido-Isoda function is then proposed. Function-space-based numerics for parabolic GNEPs and a spot-market model are developed.
Article
The Generalized Nash Equilibrium Problem is an important model that has its roots in the economic sciences but is being fruitfully used in many different fields. In this survey paper we aim at discussing its main properties and solution algorithms, pointing out what could be useful topics for future research in the field.
Article
We deal with jointly convex generalized Nash equilibrium problems in infinite-dimensional spaces. For their solution, we extend a finite-dimensional optimization approach and design a convergent algorithm in Hilbert space. Then we apply our investigations to a class of multiobjective optimal control problems with control and state constraints that are governed by elliptic partial differential equations. We present a new reformulation as a jointly convex generalized Nash equilibrium problem. We study a finite element approximation of such a multiobjective optimal control problem, and further we prove convergence in appropriate function spaces. Finally, we provide some numerical results that show the effectiveness of our algorithm for multiobjective optimal control problems.
Article
The generalized Nash equilibrium problem (GNEP) is an extension of the classical Nash equilibrium problem where both the objective functions and the constraints of each player may depend on the rivals' strategies. This class of problems has a multitude of important engineering applications and yet solution algorithms are extremely scarce. In this paper, we analyze in detail a globally convergent penalty method that has favorable theoretical properties. We also consider strengthened results for a particular subclass of problems very often considered in the literature. Basically our method reduces the GNEP to a single penalized (and nonsmooth) Nash equilibrium problem. We suggest a suitable method for the solution of the latter penalized problem and present extensive numerical results.
Article
We analyze some new decomposition schemes for the solution of generalized Nash equilibrium problems. We prove convergence for a particular class of generalized potential games that includes some interesting engineering problems. We show that some versions of our algorithms can also deal with problems lacking any convexity and consider separately the case of two players for which stronger results can be obtained. Keywords: Generalized Nash equilibrium problem, generalized potential game, decomposition, regularization