
Uday V. Shanbhag - Pennsylvania State University
About
169 Publications · 13,959 Reads
4,055 Citations
Publications (169)
The mathematical program with equilibrium constraints (MPEC) is a powerful yet challenging class of constrained optimization problems, where the constraints are characterized by a parametrized variational inequality (VI) problem. While efficient algorithms for addressing MPECs and their stochastic variants (SMPECs) have been recently presented, dis...
We consider an N-player hierarchical game in which the ith player’s objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework allows for capturing a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop...
We consider a class of smooth $N$-player noncooperative games, where player objectives are expectation-valued and potentially nonconvex. In such a setting, we consider the largely open question of efficiently computing a suitably defined {\em quasi}-Nash equilibrium (QNE) via a single-step gradient-response framework. First, under a suitably define...
In this paper, we consider a distributed learning problem in a subnetwork zero-sum game, where agents are competing in different subnetworks. These agents are connected through time-varying graphs where each agent has its own cost function and can receive information from its neighbors. We propose a distributed mirror descent algorithm for computin...
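The building block of the distributed scheme above is the entropic mirror descent step. A minimal single-agent sketch on the probability simplex, shown on a plain linear cost rather than a game (the cost vector, step size, and iteration count are all illustrative, not from the paper):

```python
import numpy as np

def md_entropy(c, steps=300, eta=0.1):
    """Entropic mirror descent for min c^T x over the probability simplex."""
    x = np.full(len(c), 1.0 / len(c))   # start at the simplex center
    for _ in range(steps):
        x = x * np.exp(-eta * c)        # multiplicative-weights update
        x = x / x.sum()                 # Bregman projection = renormalization
    return x

c = np.array([0.3, 0.1, 0.7])
x = md_entropy(c)                       # mass concentrates on argmin of c
```

With a linear cost, the iterate is proportional to exp(-k·eta·c) after k steps, so nearly all mass ends up on the cheapest coordinate.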
In this paper, we consider a strongly convex stochastic optimization problem and propose three classes of variable sample-size stochastic first-order methods: (i) the standard stochastic gradient descent method, (ii) its accelerated variant, and (iii) the stochastic heavy-ball method. In each scheme, the exact gradients are approximated by averagin...
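The variable sample-size idea in scheme (i) above can be sketched as follows; the quadratic toy objective, step size, and geometric sample-size growth are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def vss_sgd(grad_sample, x0, steps=50, gamma=0.1, rho=1.2, rng=None):
    """Variable sample-size SGD: at iteration k, the exact gradient is
    approximated by averaging N_k = ceil(rho**k) sampled gradients."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        n_k = int(np.ceil(rho ** k))    # increasing per-iteration sample size
        g = np.mean([grad_sample(x, rng) for _ in range(n_k)], axis=0)
        x = x - gamma * g               # constant step on the averaged gradient
    return x

# Toy strongly convex problem: f(x) = 0.5 * E[||x - xi||^2], xi ~ N(mu, I),
# whose unique minimizer is mu.
mu = np.array([1.0, -2.0])
noisy_grad = lambda x, rng: x - (mu + rng.standard_normal(2))
x_star = vss_sgd(noisy_grad, x0=np.zeros(2), rng=0)
```

Averaging a growing batch shrinks the gradient noise fast enough that a constant step size suffices, which is the mechanism behind the linear-rate claims for such schemes in strongly convex regimes.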
With a manifold growth in the scale and intricacy of systems, the challenges of parametric misspecification become pronounced. These concerns are further exacerbated in compositional settings, which emerge in problems complicated by modeling risk and robustness. In “Data-Driven Compositional Optimization in Misspecified Regimes,” the authors consid...
We consider monotone inclusion problems with expectation-valued operators, a class of problems that subsumes convex stochastic optimization problems with possibly smooth expectation-valued constraints as well as subclasses of stochastic variational inequality and equilibrium problems. A direct application of splitting schemes is complicated by the...
We consider a continuous-valued simulation optimization (SO) problem, where a simulator is built to optimize an expected performance measure of a real-world system while parameters of the simulator are estimated from streaming data collected periodically from the system. At each period, a new batch of data is combined with the cumulative data and t...
We consider a class of $N$-player nonsmooth aggregative games over networks in stochastic regimes, where the $i$th player is characterized by a composite cost function $f_i+d_i+r_i$, $f_i$ is a smooth expectation-valued function dependent on its own strategy and an aggregate function of rival strategies, $d_i$ is a convex hierarchical term dependen...
The theory of learning in games has so far focused mainly on games with simultaneous moves. Recently, researchers in machine learning have started investigating learning dynamics in games involving hierarchical decision-making. We consider an $N$-player hierarchical game in which the $i$th player's objective comprises an expectation-valued term,...
We consider a class of noncooperative hierarchical N-player games where the ith player solves a parametrized stochastic mathematical program with equilibrium constraints (MPEC) with the caveat that the implicit form of the ith player’s MPEC is convex in the player's strategy, given rival decisions. Few, if any, general purpose schemes exist for comput...
Mathematical programs with equilibrium constraints (MPECs) represent a class of hierarchical programs that allow for modeling problems in engineering, economics, finance, and statistics. While stochastic generalizations have been assuming increasing relevance, there is a pronounced absence of efficient first/zeroth-order schemes with non-asymptotic...
In this paper, we consider the maximization of the probability $\mathbb{P}\{\zeta \mid \zeta \in K(x)\}$ over a closed and convex set $X$, a special case of the chance-constrained optimization problem. Suppose $K(x) \triangleq \{\zeta \in K \mid c(x,\zeta) \geq 0\}$, where $\zeta$ is uniformly distributed on a convex and compact set $K$ and $c(x,\zeta)$ is defined as either $c(x,\zeta) \triangleq 1-|\zeta^T x|^m$ where $m \geq 0$ (Setting A) or $c(x,\zeta) \triangleq Tx-\zeta$ (Setting B)....
Decision making under uncertainty has been studied extensively over the last 70 years, if not earlier. In the field of optimization, models for two-stage stochastic linear programming, presented by Dantzig [1] and Beale [2], are often viewed as the basis for the subsequent development of the field of stochastic optimization. This subfield of...
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator T and a single-valued monotone, Lipschitz continuous, and expectation-valued operator V. We draw motivation from the seminal work by Attouch and Cabot (Attouch in AMO 80:547–598, 2019, Attouch in MP 184: 243–287) on relax...
The rapid penetration of uncertain renewable energy resources into the power grid has made generation planning and real-time power balancing a challenge, prompting the need for advanced control on the demand-side. Although a grid-integrated building portfolio has been studied by coordinating building-level flexible energy resources, to reap further...
Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in smooth, unconstrained, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presen...
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator $T$ and a single-valued monotone, Lipschitz continuous, and expectation-valued operator $V$. We draw motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions and present a...
We consider the minimization of an $L_0$-Lipschitz continuous and expectation-valued function, denoted by $f$ and defined as $f(x)\triangleq \mathbb{E}[\tilde{f}(x,\omega)]$, over a Cartesian product of closed and convex sets with a view towards obtaining both asymptotics as well as rate and complexity guarantees for computing an approximate statio...
Classical extragradient schemes and their stochastic counterpart represent a cornerstone for resolving monotone variational inequality problems. Yet, such schemes have a per-iteration complexity of two projections onto a convex set and require two evaluations of the map, the former of which could be relatively expensive. We consider two related ave...
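The classical extragradient step referenced above, sketched on a toy monotone map over a box (the map, step size, and box are illustrative assumptions; the paper's contribution concerns averaging variants, not this baseline):

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, step=0.4, iters=200):
    """Korpelevich extragradient for VI(X, F): find x* in X with
    F(x*)^T (x - x*) >= 0 for all x in X. Two projections and two
    map evaluations per iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - step * F(x))   # extrapolation step
        x = project_box(x - step * F(y))   # update uses the map at y
    return x

# Monotone rotation-plus-shrink map with unique solution x* = 0;
# the step size satisfies step < 1/L for its Lipschitz constant L.
A = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_sol = extragradient(lambda x: A @ x, x0=np.array([0.9, -0.7]))
```

The two projections per iteration are exactly the per-iteration cost the abstract flags as potentially expensive when the feasible set is complicated.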
Stochastic MPECs have found increasing relevance for modeling a broad range of settings in engineering and statistics. Yet, there seem to be no efficient first/zeroth-order schemes equipped with non-asymptotic rate guarantees for resolving even deterministic variants of such problems. We consider MPECs where the parametrized lower-level equilibrium...
We consider a class of hierarchical noncooperative $N$-player games where the $i$th player solves a parametrized MPEC with the caveat that the implicit form of the $i$th player's MPEC is convex in the player's strategy, given rival decisions. We consider settings where player payoffs are expectation-valued with lower-level equilibrium constraints imp...
In this paper we propose a new operator splitting algorithm for distributed Nash equilibrium seeking under stochastic uncertainty, featuring relaxation and inertial effects. Our work is inspired by recent deterministic operator splitting methods, designed for solving structured monotone inclusion problems. The algorithm is derived from a forward-ba...
We consider an ℓ0-minimization problem where f(x)+γ‖x‖0 is minimized over a polyhedral set and the ℓ0-norm regularizer implicitly emphasizes the sparsity of the solution. Such a setting captures a range of problems in image processing and statistical learning. Given the nonconvex and discontinuous nature of this norm, convex regularizers as substit...
We consider a misspecified optimization problem that requires minimizing a function f(x;q*) over a closed and convex set X, where q* is an unknown vector of parameters that may be learnt by a parallel learning process. In this context, we examine the development of coupled schemes that generate iterates {x_k, q_k} such that as k goes to infinity, {x_k} co...
We consider monotone inclusion problems where the operators may be expectation-valued. A direct application of proximal and splitting schemes is complicated by resolving problems with expectation-valued maps at each step, a concern that is addressed by using sampling. Accordingly, we propose avenues for addressing uncertainty in the mapping. (i) Va...
In this paper, we study a stochastic strongly convex optimization problem and propose three classes of variable sample-size stochastic first-order methods including the standard stochastic gradient descent method, its accelerated variant, and the stochastic heavy ball method. In the iterates of each scheme, the unavailable exact gradients are appro...
This work considers the minimization of a sum of an expectation-valued coordinate-wise smooth nonconvex function and a nonsmooth block-separable convex regularizer. We propose an asynchronous variance-reduced algorithm, where in each iteration, a single block is randomly chosen to update its estimates by a proximal variable sample-size stochastic g...
We consider the stochastic variational inequality problem in which the map is expectation-valued in a component-wise sense. Much of the available convergence theory and rate statements for stochastic approximation schemes are limited to monotone maps. However, non-monotone stochastic variational inequality problems are not uncommon and are seen to...
We consider the structured stochastic convex program requiring the minimization of $E[\tilde f (x, \xi)]+E[\tilde g(y, \xi)]$ subject to the constraint $Ax + By = b$. Motivated by the need for decentralized schemes, we propose a stochastic inexact ADMM (SI-ADMM) framework where subproblems are solved inexactly via stochastic approximation sche...
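For reference, the deterministic exact-subproblem counterpart of such an ADMM scheme can be sketched on a toy instance (everything below — the quadratic instance, penalty ρ, and iteration count — is illustrative; SI-ADMM replaces the closed-form subproblem solves with inexact stochastic-approximation solves):

```python
import numpy as np

# Toy instance of min f(x) + g(y) s.t. Ax + By = b with
# f(x) = 0.5||x - a||^2, g(y) = 0.5||y - c||^2, A = B = I, b = 0,
# whose solution is x* = (a - c)/2 and y* = -x*.
a, c, rho = np.array([1.0, 3.0]), np.array([-1.0, 1.0]), 1.0
x = y = u = np.zeros(2)                    # u is the scaled dual variable
for _ in range(100):
    x = (a - rho * (y + u)) / (1 + rho)    # exact x-subproblem (quadratic)
    y = (c - rho * (x + u)) / (1 + rho)    # exact y-subproblem (quadratic)
    u = u + x + y                          # dual ascent on the residual x + y
```

Each subproblem here is a strongly convex quadratic with a closed-form minimizer; when f and g are expectations, those minimizers are unavailable, which is precisely the gap the inexact stochastic subproblem solves are meant to fill.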
We consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a closed and convex set. In strongly monotone regimes, we present a variable sample-size averaging scheme (VS-Ave) that achieves a linear rate with an optimal oracle complexity. In addition, the iteration complexity is shown to display a muted...
Central limit theorems are among the most celebrated limit theorems in probability theory (Lindeberg 1922, Feller 1945). It may be recalled that the sum of n independent and identically distributed zero-mean, square-integrable random variables grows at the rate of $\sqrt{n}$. Consequently, by dividing this sum by $\sqrt{n}$, the...
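The $\sqrt{n}$ scaling described above is easy to check numerically; the sample counts and seed below are arbitrary choices:

```python
import numpy as np

# For zero-mean, unit-variance i.i.d. variables, the sum S_n has standard
# deviation sqrt(n), so S_n / sqrt(n) has (approximately) unit standard
# deviation for every n.
rng = np.random.default_rng(0)
for n in (100, 2000):
    sums = rng.standard_normal((4000, n)).sum(axis=1)  # 4000 realizations of S_n
    print(n, np.std(sums / np.sqrt(n)))                # close to 1.0 for each n
```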
In this paper, we consider a large-scale convex-concave saddle point problem that arises in many machine learning problems, such as robust classification, kernel matrix learning, etc. To contend with the challenges in computing full gradients, we employ a block-coordinate primal-dual scheme in which a randomly selected primal and dual block of vari...
Classical extragradient schemes and their stochastic counterpart represent a cornerstone for resolving monotone variational inequality problems. Yet, such schemes have a per-iteration complexity of two projections on a convex set and two evaluations of the map, the former of which could be relatively expensive if $X$ is a complicated set. We consid...
Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequalities (SCVI) where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of the component sets is large. For solving this type of p...
This paper considers a stochastic Nash game in which each player minimizes an expectation valued composite objective. We make the following contributions. (I) Under suitable monotonicity assumptions on the concatenated gradient map, we derive ({\bf optimal}) rate statements and oracle complexity bounds for the proposed variable sample-size proximal...
This paper considers an $N$-player stochastic Nash game in which the $i$th player minimizes a composite objective $f_i(x) + r_i(x_i)$, where $f_i$ is expectation-valued and $r_i$ has an efficient prox-evaluation. In this context, we make the following contributions. (i) Under a strong monotonicity assumption on the concatenated gradient map, we der...
Virtual transactions are financial positions that allow market participants to exploit arbitrage opportunities arising when day-ahead electricity prices are predictably higher or lower than expected real-time prices. Unprofitable virtual transactions may be used to move day-ahead prices in a direction that enhances the value of related positions, l...
This paper considers the minimization of a sum of an expectation-valued smooth nonconvex function and a nonsmooth block-separable convex regularizer. By combining a randomized block-coordinate descent method with a proximal variable sample-size stochastic gradient (VSSG) method, we propose a randomized block proximal VSSG algorithm. In each iterati...
In this paper we propose a class of randomized primal-dual methods to contend with large-scale saddle point problems defined by a convex-concave function $\mathcal{L}(\mathbf{x},y)\triangleq\sum_{i=1}^m f_i(x_i)+\Phi(\mathbf{x},y)-h(y)$. We analyze the convergence rate of the proposed method under the settings of mere convexity and strong convexity...
In the last several years, stochastic quasi-Newton (SQN) methods have assumed increasing relevance in solving a breadth of machine learning and stochastic optimization problems. Inspired by recently presented SQN schemes [1],[2],[3], we consider merely convex and possibly nonsmooth stochastic programs and utilize increasing sample-sizes to allow fo...
We consider a class of structured nonsmooth stochastic convex programs. Traditional stochastic approximation schemes in nonsmooth regimes are hindered by a convergence rate of $\mathcal{O}(1/\sqrt{k})$ compared with a linear and sublinear (specifically $\mathcal{O}(1/k^2)$) in deterministic strongly convex and convex regimes, respectively. One aven...
We consider minimizing $f(x) = \mathbb{E}[f(x,\omega)]$ when $f(x,\omega)$ is possibly nonsmooth and either strongly convex or convex in $x$. (I) Strongly convex. When $f(x,\omega)$ is $\mu$-strongly convex in $x$, we propose a variable sample-size accelerated proximal scheme (VS-APM) and apply it on $f_{\eta}(x)$, the ($\eta$-)Moreau smoothed vari...
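For intuition on the $(\eta$-)Moreau smoothing used above, the scalar case $f(x) = |x|$ admits a closed-form envelope (the Huber function); the sketch below checks the uniform gap bound $0 \le f - f_\eta \le \eta/2$ for this $L_0 = 1$ Lipschitz function (the choice of eta and the evaluation grid are arbitrary):

```python
import numpy as np

def moreau_abs(x, eta):
    """Moreau envelope of f(u) = |u|: min_u |u| + (1/(2*eta))*(u - x)^2,
    i.e. the Huber function -- quadratic near zero, linear (shifted down
    by eta/2) elsewhere."""
    return np.where(np.abs(x) <= eta, x**2 / (2 * eta), np.abs(x) - eta / 2)

xs = np.linspace(-2, 2, 401)
gap = np.abs(xs) - moreau_abs(xs, eta=0.5)
# gap lies in [0, eta/2] everywhere, so minimizing the smooth f_eta
# tracks the nonsmooth f up to an O(eta) error.
```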
This paper addresses a particular instance of probability maximization problems with random linear inequalities. We consider a novel approach that relies on recent findings in the context of non-Gaussian integrals of positively homogeneous functions. This allows for showing that such a maximization problem can be recast as a convex stochastic optim...
We consider the resolution of the structured stochastic convex program: $\min \ \mathbb{E}[\tilde f(x,\xi)]+\mathbb{E}[\tilde g(y,\xi)]$ such that $Ax + By = b$. To exploit problem structure and allow for developing distributed schemes, we propose a stochastic inexact ADMM ({\bf SI-ADMM}) in which the subproblems are solved inexactly via stochastic...
The distributed computation of equilibria and optima has seen growing interest in a broad collection of networked problems. We consider the computation of equilibria of convex stochastic Nash games characterized by a possibly nonconvex potential function. Our focus is on two classes of stochastic Nash games: (P1): A potential stochastic Nash game,...
Motivated by applications arising from large-scale optimization and machine learning, we consider stochastic quasi-Newton (SQN) methods for solving unconstrained convex optimization problems. The convergence analysis of the SQN methods, both full and limited-memory variants, requires the objective function to be strongly convex. However, this assump...
We consider an $\ell_0$-minimization problem where $f(x) + \|x\|_0$ is minimized over a polyhedral set and the $\ell_0$-norm penalty implicitly emphasizes sparsity of the solution. Such a setting captures a range of problems in image processing and statistical learning. However, given the nonconvex and discontinuous nature of this norm, convex...
This paper formally introduces and studies a non-cooperative multi-agent game under uncertainty. The well-known Nash equilibrium is employed as the solution concept of the game. While there are several formulations of a stochastic Nash equilibrium problem, we focus mainly on a two-stage setting of the game wherein each agent is risk-averse and solv...
This work considers a stochastic Nash game in which each player solves a parameterized stochastic optimization problem. In deterministic regimes, best-response schemes have been shown to be convergent under a suitable spectral property associated with the proximal best-response map. However, a direct application of this scheme to stochastic setting...
We consider a misspecified optimization problem that requires minimizing a convex function $f(x;\theta^*)$ in $x$ over a conic constraint set represented by $h(x;\theta^*) \in \mathcal{K}$, where $\theta^*$ is an unknown (or misspecified) vector of parameters, $\mathcal{K}$ is a proper cone, and $h$ is affine in $x$. Suppose $\theta^*$ is not avail...
We consider the misspecified optimization problem of minimizing a convex function $f(x;\theta^*)$ in $x$ over a conic constraint set represented by $h(x;\theta^*) \in \mathcal{K}$, where $\theta^*$ is an unknown (or misspecified) vector of parameters, $\mathcal{K}$ is a closed convex cone and $h$ is affine in $x$. Suppose $\theta^*$ is unavailable...
We consider a class of Nash games, termed as aggregative games, being played over a networked system. In an aggregative game, a player's objective is a function of the aggregate of all the players' decisions. Every player maintains an estimate of this aggregate, and the players exchange this information with their local neighbors over a connected n...
Motivated by applications in optimization and machine learning, we consider stochastic quasi-Newton (SQN) methods for solving stochastic optimization problems. In the literature, the convergence analysis of these algorithms relies on strong convexity of the objective function. To our knowledge, no theoretical analysis is provided for the rate state...
We consider the minimization of a convex expectation-valued objective $\mathbb{E}[f(x;\theta^*,\xi)]$ over a closed and convex set $X$ in a regime where $\theta^*$ is unavailable and $\xi$ is a suitably defined random variable. Instead, $\theta^*$ may be obtained through the solution of a learning problem that requires minimizing a metric $\mathbb{...
We consider a misspecified optimization problem that requires minimizing a convex function f(x;q*) in x over a constraint set represented by h(x;q*) <= 0, where q* is an unknown (or misspecified) vector of parameters. Suppose q* is learnt by a distinct process that generates a sequence of estimators q_k, each of which is an increasingly accurate...
We consider a stochastic convex optimization problem that requires minimizing a sum of misspecified agent-specific expectation-valued convex functions over the intersection of a collection of agent-specific convex sets. This misspecification is manifested in a parametric sense and may be resolved through solving a distinct stochastic convex learning...
Variational inequality and complementarity problems have found utility in modeling a range of optimization and equilibrium problems. Yet, while there has been tremendous growth in addressing uncertainty in optimization, relatively less progress has been seen in the context of variational inequality problems, exceptions being efforts to solve variat...
We consider a misspecified optimization problem that requires minimizing a convex function f(x; θ∗) in x over a closed and convex set X, where θ∗ is an unknown vector of parameters. Suppose θ∗ may be learnt by a parallel learning process that generates a sequence of estimators θ_k, each of which is an increasingly accurate approximation of θ∗. In th...
Variational inequality and complementarity problems have found utility in modeling a range of optimization and equilibrium problems arising in engineering, economics, and the sciences. Yet, while there has been tremendous growth in addressing uncertainty in optimization, far less progress has been seen in the context of variational inequality prob...
We consider multi-user optimization problems and Nash games with stochastic convex objectives, instances of which arise in decentralized control problems. The associated equilibrium conditions of both problems can be cast as Cartesian stochastic variational inequality problems with mappings that are strongly monotone but not necessarily Lipschitz c...
This paper considers stochastic variational inequality (SVI) problems where the mapping is merely monotone and not necessarily Lipschitz continuous. Traditionally, stochastic approximation schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. In the first part of the paper, we weaken these assumption...
The variational inequality problem represents an effective tool for capturing a range of phenomena arising in engineering, economics, and applied sciences. Prompted by the role of uncertainty, recent efforts have considered both the analysis as well as the solution of the associated stochastic variational inequality problem where the map is expecta...
In the financial industry, risk has been traditionally managed by the imposition of value-at-risk or VaR constraints on portfolio risk exposure. Motivated by recent events in the financial industry, we examine the role that risk-seeking traders play in the accumulation of large and possibly infinite risk. We proceed to show that when traders employ...
Variational inequality problems find wide applicability in modeling a range of optimization and equilibrium problems. We consider the stochastic generalization of such a problem wherein the mapping is pseudomonotone and make two sets of contributions in this paper. First, we provide sufficiency conditions for the solvability of such problems that d...
We consider stochastic variational inequality problems where the mapping is monotone over a compact convex set. We present two robust variants of stochastic extragradient algorithms for solving such problems. Of these, the first scheme employs an iterative averaging technique where we consider a generalized choice for the weights in the averaged se...