# Paul Tseng's research while affiliated with University of Washington Seattle and other places

**What is this page?**

This page lists the scientific contributions of an author who either does not have a ResearchGate profile or has not yet added these contributions to their profile.

It was automatically created by ResearchGate to record this author's body of work. We create such pages to advance our goal of creating and maintaining the most comprehensive scientific repository possible. In doing so, we process publicly available (personal) data relating to the author as a member of the scientific community.

If you're a ResearchGate member, you can follow this page to keep up with this author's work.

If you are this author, and you don't want us to display this page anymore, please let us know.


## Publications (121)

We consider incrementally updated gradient methods for minimizing the sum of smooth functions and a convex function. This method can use a (sufficiently small) constant stepsize or, more practically, an adaptive stepsize that is decreased whenever sufficient progress is not made. We show that if the gradients of the smooth functions are Lipschitz c...
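
The flavor of such incremental methods can be seen on a toy instance. The sketch below is my own illustration (not the paper's algorithm): it minimizes Σᵢ ½(x − cᵢ)² + λ|x| by taking one gradient step per smooth term followed by a proximal (soft-thresholding) step, with a small constant stepsize:

```python
import math

def soft(z, t):
    # proximal map of t*|.|: soft-thresholding
    return math.copysign(max(abs(z) - t, 0.0), z)

def incremental_prox_grad(cs, lam, step=1e-3, sweeps=2000, x0=0.0):
    """Minimize sum_i 0.5*(x - c_i)^2 + lam*|x| one term at a time:
    a gradient step for the i-th smooth term, then a proximal step
    for an evenly split share of the nonsmooth part."""
    x = x0
    n = len(cs)
    for _ in range(sweeps):
        for c in cs:
            x = soft(x - step * (x - c), step * lam / n)
    return x

# With cs = [1, 2, 3] and lam = 0.3 the exact minimizer is
# soft(sum(cs), lam) / 3 = (6 - 0.3) / 3 = 1.9.
print(incremental_prox_grad([1.0, 2.0, 3.0], 0.3))
```

With a constant stepsize the iterates only reach a neighborhood of the minimizer whose size shrinks with the stepsize, which is why an adaptive stepsize rule like the paper's is the more practical choice.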

We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz, and moment structures and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction me...

We consider the problem of estimating the volatility of a financial asset from a time series record. We believe the underlying volatility process is smooth, possibly stationary, and with potential small to large jumps due to market news. By drawing parallels between time series and regression models, in particular between stochastic volatility mode...

Recently Wang, Zheng, Boyd, and Ye (SIAM J Optim 19:655–673, 2008) proposed a further relaxation of the semidefinite programming (SDP) relaxation of the sensor network localization problem, named edge-based SDP (ESDP). In simulation, the ESDP is solved much faster by an interior-point method than the SDP relaxation, and the solutions found are comparable...

We consider a class of unconstrained nonsmooth convex optimization problems, in which the objective function is the sum of a convex smooth function on an open subset of matrices and a separable convex function on a set of matrices. This problem includes the covariance selection problem that can be expressed as an ℓ1-penalized maximum likelihood es...

In this paper, we consider a cognitive radio system with one primary (licensed) user and multiple secondary (unlicensed) users. Considering the interference temperature constraints, the secondary users compete for the available spectrum so as to satisfy their need for communication. Borrowing the concept of price from market theory, we develop a de...

We consider the generalized Nash equilibrium problem (GNEP), in which each player’s strategy set may depend on the rivals’
strategies through shared constraints. A practical approach to solving this problem that has received increasing attention
lately entails solving a related variational inequality (VI). From the viewpoint of game theory, it is i...

We propose a first-order interior-point method for linearly constrained smooth optimization that unifies and extends the first-order affine-scaling method and the replicator dynamics method for standard quadratic programming. Global convergence and, in the case of quadratic programs, (sub)linear convergence rate and iterate convergence results are derived...

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods...

Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured
and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution
of the original noiseless problem. Related to this is an error bound for the linear convergence anal...

We propose a non-linear density estimator, which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. This estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cros...

We consider a recently proposed optimization formulation of multi-task learning based on trace norm regularized least squares. While this problem may be formulated as a semidefinite program (SDP), its size is beyond general SDP solvers. Previous solution approaches apply proximal gradient methods to solve the primal problem. We derive new primal an...

We consider the problem of minimizing the sum of a smooth function and a separable convex function. This problem includes as special cases bound-constrained optimization and smooth optimization with ℓ1-regularization. We propose a (block) coordinate gradient descent method for solving this class of nonsmooth separable problems. We establish global...
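
For the ℓ1-regularized special case, a plain single-coordinate version of such a method reduces to cyclic soft-thresholding. A minimal sketch for ½‖Ax − b‖² + λ‖x‖₁ (an illustration of the idea, not the paper's block method or stepsize rules):

```python
import math

def soft(z, t):
    # proximal map of t*|.|: soft-thresholding
    return math.copysign(max(abs(z) - t, 0.0), z)

def cd_lasso(A, b, lam, sweeps=500):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1.
    Each coordinate subproblem is solved exactly by soft-thresholding;
    the residual r = b - Ax is maintained incrementally."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    r = list(b)                                    # residual b - Ax (x starts at 0)
    for _ in range(sweeps):
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            L = sum(v * v for v in col)            # curvature (A^T A)_jj
            g = -sum(col[i] * r[i] for i in range(m))  # j-th partial derivative
            new = soft(x[j] - g / L, lam / L)
            d = new - x[j]
            if d != 0.0:
                for i in range(m):
                    r[i] -= col[i] * d
                x[j] = new
    return x
```

Optimality can be checked through the subgradient conditions: at a solution, the j-th partial derivative of the smooth part equals −λ·sign(xⱼ) when xⱼ ≠ 0 and lies in [−λ, λ] otherwise.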

Sparse overcomplete representations have attracted much interest recently for their applications to signal processing. In a recent work, Donoho, Elad, and Temlyakov (2006) showed that, assuming sufficient sparsity of the ideal underlying signal and approximate orthogonality of the overcomplete dictionary, the sparsest representation can be found,...

We consider the problem of minimizing the weighted sum of a smooth function f and a convex function P of n real variables subject to m linear equality constraints. We propose a block-coordinate gradient descent method for solving this problem, with the coordinate
block chosen by a Gauss-Southwell-q rule based on sufficient predicted descent. We est...

An important issue in convex programming concerns duality gap. Various conditions have been developed over the years that
guarantee no duality gap, including one developed by Rockafellar (Network flows and monotropic programming. Wiley-Interscience,
New York, 1984)involving separable objective function and affine constraints. We show that this suff...

Piecewise smooth (PS) functions are perhaps the best-known examples of semismooth functions, which play key roles in the solution of nonsmooth equations and nonsmooth optimization. Recently, there have emerged other examples of semismooth functions, including the p-norm function (1 < p < ∞) defined on ℝn with n≥2, NCP functions, smoothing/penalty functio...

The question of nonemptiness of the intersection of a nested sequence of closed sets is fundamental in a number of important optimization topics, including the existence of optimal solutions, the validity of the minimax inequality in zero sum games, and the absence of a duality gap in constrained optimization. We introduce the new notion of an asym...

The elastic-mode formulation of the problem of minimizing a nonlinear function subject to equilibrium constraints has appealing local properties in that, for a finite value of the penalty parameter, local solutions satisfying first- and second-order necessary optimality conditions for the original problem are also first- and second-order points of...

The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multipli...

M. Fukushima and P. Tseng [SIAM J. Optim. 12, No. 3, 724–739 (2002; Zbl 1005.65064)] have proposed an ε-active set algorithm for solving a mathematical program with a smooth objective function and linear inequality/complementarity constraints. It is asserted therein that, under a uniform LICQ on the ε-feasible set, this algorithm generates iterates...

We consider the NP-hard problem of finding a minimum norm vector in n-dimensional real or complex Euclidean space, subject to m concave homogeneous quadratic constraints. We show that a semidefinite programming (SDP) relaxation for this nonconvex quadratically constrained quadratic program (QP) provides an O(m²) approximation in the real case and a...

The sensor network localization problem has been much studied. Recently Biswas and Ye proposed a semidefinite programming (SDP) relaxation of this problem which has various nice properties and for which a number of solution methods have been proposed. Here, we study a second-order cone programming (SOCP) relaxation of this problem, motivated by its...

We present a unified approach to establishing the existence of global minima of a (non)convex constrained optimization problem. Our results unify and generalize previous existence results for convex and nonconvex programs, including the Frank-Wolfe theorem, and for (quasi) convex quadratically constrained quadratic programs and convex polynomial pr...

We consider convex constrained optimization problems, and we enhance the classical Fritz John optimality conditions to assert the existence of multipliers with special sensitivity properties. In particular, we prove the existence of Fritz John multipliers that are informative in the sense that they identify constraints whose relaxation, at rates...

A popular approach to solving the nonlinear complementarity problem (NCP) is to reformulate it as the global minimization of a certain merit function over ℝn. A popular choice of the merit function is the squared norm of the Fischer-Burmeister function, shown to be smooth over ℝn and, for monotone NCP, each stationary point is a solution of the NCP...

We study convergence properties of Dikin’s affine scaling algorithm applied to nonconvex quadratic minimization. First, we
show that the objective function value either diverges or converges Q-linearly to a limit. Using this result, we show that,
in the case of box constraints, the iterates converge to a unique point satisfying first-order and weak...

A simple and yet powerful method is presented to estimate nonlinearly and nonparametrically the components of additive models using wavelets. The estimator enjoys the good statistical and computational properties of the Waveshrink scatterplot smoother and it can be efficiently computed using the block coordinate relaxation optimization technique. A...

Wavelet-based denoising techniques are well suited to estimate spatially inhomogeneous signals. Waveshrink (Donoho and Johnstone) assumes independent Gaussian errors and equispaced sampling of the signal. Various articles have relaxed some of these assumptions, but a systematic generalization to distributions such as Poisson, binomial, or Berno...

We consider Bayesian nonparametric function estimation using a Markov random field prior based on the Laplace distribution. We describe efficient methods for finding the exact maximum a posteriori estimate, which handle constraints naturally and avoid the problems posed by nondifferentiability of the posterior distribution; the methods also make li...

The EM algorithm is a popular method for maximum likelihood estimation from incomplete data. This method may be viewed as a proximal point method for maximizing the log-likelihood function using an integral form of the Kullback-Leibler distance function. Motivated by this interpretation, we consider a proximal point method using an integral form of...

For any function f from ℝ to ℝ, one can define a corresponding function on the space of n×n (block-diagonal) real symmetric matrices by applying f to the eigenvalues of the spectral decomposition. We show that this matrix-valued function inherits from f the properties of continuity, (local) Lipschitz continuity, directional differentiability, Fréch...

Let K be the Lorentz/second-order cone in ℝn. For any function f from ℝ to ℝ, one can define a corresponding function fsoc(x) on ℝn by applying f to the spectral values of the spectral decomposition of x ∈ ℝn with respect to K. We show that this vector-valued function inherits from f the properties of continuity, (local) Lipschitz continuity, directional differe...

This paper presents a sequential quadratically constrained quadratic programming (SQCQP) method for solving smooth convex programs. The SQCQP method solves at each iteration a subproblem that involves convex quadratic inequality constraints as well as a convex quadratic objective function. Such a quadratically constrained quadratic programming prob...

Recently, interior-point algorithms have been applied to nonlinear and nonconvex optimization. Most of these algorithms are either primal-dual path-following or affine-scaling in nature, and some of them are conjectured to converge to a local minimum. We give several examples to show that this may be untrue and we suggest some strategies for overcoming thi...

We study an infeasible primal-dual interior-point trust-region method for constrained minimization. This method uses a log-barrier function for the slack variables and updates the slack variables using second-order correction. We show that if a certain set containing the initial iterate is bounded and the origin is not in the convex hull of the nea...

We propose feasible descent methods for constrained minimization that do not make explicit use of the derivative of the objective
function. The methods iteratively sample the objective function value along a finite set of feasible search arcs and decrease
the sampling stepsize if an improved objective function value is not sampled. The search arcs...

We consider a mathematical program with smooth objective function and linear inequality/complementarity constraints. We propose an ε-active set algorithm which, under a uniform LICQ on the ε-feasible set, generates iterates whose cluster points are B-stationary points of the problem. If the objective function is quadratic and ε is set to zero, the...

We propose a variant of the Nelder-Mead algorithm derived from a reinterpretation of univariate golden-section direct search. In the univariate case, convergence of the variant can be analyzed analogously to golden-section search. In the multivariate case, we modify the variant by replacing strict descent with fortified descent and maintaining the...
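
For reference, the univariate golden-section search that motivates the variant shrinks the bracket by the inverse golden ratio at each step while reusing one interior function value. A standard sketch (not the paper's multivariate method):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search for a minimizer of a unimodal f on [a, b].
    The bracket shrinks by a factor of ~0.618 per iteration, and one of
    the two interior function values is reused each time."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # inverse golden ratio
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                 # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                       # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # close to 2.0
```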

We consider the problem of denoising a one-dimensional signal modeled as the realization of a Markov random field (MRF) by maximizing its posterior distribution. We study the maximum a posteriori (MAP) estimate corresponding to the Laplacian (ℓ1) MRF prior to avoid oversmoothing regions with large intensity gradient. Although the MAP estimate is...

Smoothing functions have been much studied in the solution of optimization and complementarity problems with nonnegativity constraints. In this paper, we extend smoothing functions to problems in which the nonnegative orthant is replaced by the direct product of second-order cones. These smoothing functions include the Chen-Mangasarian class and th...

We propose two new methods for the solution of the single commodity, separable convex cost network flow problem: the ε-relaxation method and the auction/sequential shortest path method. Both methods were originally developed for linear cost problems and reduce to their linear counterparts when applied to such problems. We show that both methods stem...

In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the...

In various penalty/smoothing approaches to solving a linear program, one regularizes the problem by adding to the linear cost function a separable nonlinear function multiplied by a small positive parameter. Popular choices of this nonlinear function include the quadratic function, the logarithm function, and the x ln(x)-entropy function. Furthermo...

For extracting a signal from noisy data, waveshrink and basis
pursuit are powerful tools both from an empirical and asymptotic point
of view. They are especially efficient at estimating spatially
inhomogeneous signals when the noise is Gaussian. Their performance is
altered when the noise has a long tail distribution, for instance, when
outliers ar...

We analyze the convergence rate of an asynchronous space decomposition method for constrained convex minimization in a reflexive Banach space. This method includes as special cases parallel domain decomposition methods and multigrid methods for solving elliptic partial differential equations. In particular, the method generalizes the additive Schwarz d...

Using ordinary calculus techniques, we investigate the conditions under which LeChatelier effects are signable for finite changes in parameter values. We show, for example, that the short run demand for a factor is always less responsive to price changes than the long run demand, provided that the factor of production and the fixed factor do not sw...

We generalize the ε-relaxation method of [14] for the single commodity, linear or separable convex cost network flow problem
to network flow problems with positive gains. The method maintains ε-complementary slackness at all iterations and adjusts
the arc flows and the node prices so as to satisfy flow conservation upon termination. Each iteration...

The D-gap function has been useful in developing unconstrained descent methods for solving strongly monotone variational inequality problems. We show that the D-gap function has certain properties that are useful also for monotone variational inequality problems with bounded feasible set. Accordingly, we develop two unconstrained methods based on...

Bounded linear regularity, the strong conical hull intersection property (strong CHIP), and the conical hull intersection property (CHIP) are properties of a collection of finitely many closed convex intersecting sets in Euclidean space. It was shown recently that these properties are fundamental in several branches of convex optimization, includin...

Recently, Bradley and Mangasarian [1] studied the problem of finding the nearest plane to m given points in ℝn in the least square sense. They showed that the problem reduces to finding the least eigenvalue and associated eigenvector of a certain n × n symmetric positive semidefinite matrix. We extend this result to the general problem of finding th...

The classes of P-, P0-, R0-, semimonotone, strictly semimonotone, column sufficient, and nondegenerate matrices play important roles in studying solution properties of equations and complementarity problems and convergence/complexity analysis of methods for solving these problems. It is known that the problem of deciding whether a square matr...

We consider an incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions. This method uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made. We show that if the gradients of the functions are bounded and Lipschitz continuous over a certain level set...

In this paper, we study the convergence rate of asynchronous block Jacobi and block Gauss-Seidel methods for finite- or infinite-dimensional convex minimization of the form min...

An important class of nonparametric signal processing methods entails forming a set of predictors from an overcomplete set of basis functions associated with a fast transform (e.g., wavelet packets). In these methods, the number of basis functions can far exceed the number of sample values in the signal, leading to an ill-posed prediction problem....

We propose a new simplex-based direct search method for unconstrained minimization of a real-valued function f of n variables. As in other methods of this kind, the intent is to iteratively improve an n-dimensional simplex through certain reflection/expansion/contraction steps. The method has three novel features. First, a user-chosen integer m̄k...

Recently Chen and Mangasarian proposed a class of smoothing functions for linear/nonlinear programs and complementarity problems that unifies many previous proposals. Here we study a non-interior continuation method based on these functions in which, like interior path-following methods, the iterates are maintained to lie in a neighborhood of some...

We consider the forward-backward splitting method for finding a zero of the sum of two maximal monotone mappings. This method is known to converge when the inverse of the forward mapping is strongly monotone. We propose a modification to this method, in the spirit of the extragradient method for monotone variational inequalities, under which the me...
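
The modification re-evaluates the forward mapping at an intermediate point (often called a forward-backward-forward step). In the special case with no backward (proximal) part, a sketch for finding a zero of a monotone Lipschitz map F (my own illustration under those assumptions, not the paper's general scheme):

```python
def fbf_zero(F, x0, alpha, iters=300):
    """Forward-backward-forward iteration for a zero of a monotone,
    Lipschitz map F (stepsize alpha below 1/L), with no proximal part:
        y = x - alpha*F(x);  x <- x - alpha*F(y).
    A plain forward step x <- x - alpha*F(x) can fail for such F."""
    x = list(x0)
    for _ in range(iters):
        Fx = F(x)
        y = [xi - alpha * fi for xi, fi in zip(x, Fx)]
        Fy = F(y)
        x = [xi - alpha * fi for xi, fi in zip(x, Fy)]
    return x

# A rotation field: F(x) = (x2 + 1, -x1 + 1) is monotone (skew-symmetric
# linear part) with zero at (1, -1); the plain forward iteration spirals out.
z = fbf_zero(lambda v: [v[1] + 1.0, -v[0] + 1.0], [0.0, 0.0], 0.5)
print(z)
```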

Merit functions such as the gap function, the regularized gap function, the implicit Lagrangian, and the norm squared of the
Fischer-Burmeister function have played an important role in the solution of complementarity problems defined over the cone
of nonnegative real vectors. We study the extension of these merit functions to complementarity probl...

An important class of nonparametric signal processing methods is to form a set of predictors from an overcomplete set of basis functions associated with a fast transform. In these methods, the number of basis functions can far exceed the number of sample values in the signal, leading to an ill-posed prediction problem. The 'basis pursuit' denoisin...

We show that, for some Newton-type methods such as primal-dual interior-point path following methods and Chen-Mangasarian smoothing methods, lo-cal superlinear convergence can be shown without assuming the solutions are isolated. The analysis is based on local error bounds on the distance from the iterates to the solution set.

We consider a mixed problem composed in part of finding a zero of a maximal monotone operator and in part of solving a monotone variational inequality problem. We propose a solution method for this problem that alternates between a proximal step (for the maximal monotone operator part) and a projection-type step (for the monotone variational inequa...

We propose a new method for the solution of the single commodity, separable convex cost network flow problem. The method generalizes the ε-relaxation method developed for linear cost problems and reduces to that method when applied to linear cost problems. We show that the method terminates with a near optimal solution, and we provide an associated...

We propose an infeasible path-following method for solving the monotone complementarity problem. This method maintains positivity of the iterates and uses two Newton steps per iteration---one with a centering term for global convergence and one without the centering term for local superlinear convergence. We show that every cluster point of the ite...

We consider a projection-type error bound for the linear complementarity problem involving a matrix M and vector q. First, we show that the Mangasarian-Ren sufficient condition on M for this error bound to hold globally, for all q such that the problem is solvable, is also necessary. Second, we derive necessary and sufficient conditions on M and q...

We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αMᵀ, with α ∈ (0, ∞)....

We consider the problem of evaluating a functional expression comprising the nested sums and infimal convolutions of convex piecewise-linear functions defined on the reals. For the special case where the nesting is serial, we give anO(N log N) time algorithm, whereNis the total number of breakpoints of the functions. We also prove a lower bound of...

We consider a family of primal/primal-dual/dual search directions for the monotone LCP over the space of n × n symmetric block-diagonal matrices. We consider two infeasible predictor-corrector path-following methods using these search directions, with the predictor and corrector steps used either in series (similar to the Mizuno-Todd-Ye method)...

We consider two merit functions for a generalized nonlinear complementarity problem (GNCP) based on quadratic regularization of the standard linearized gap function. The first extends Fukushima's merit function for variational inequality problems [Fukushima, Math. Programming, 53 (1992), pp. 99-110] and the second extends Mangasarian and Solodov's...

When the nonlinear complementarity problem is reformulated as that of finding the zero of a self-mapping, the norm of the self-mapping serves naturally as a merit function for the problem. We study the growth behavior of such a merit function. In particular, we show that, for the linear complementarity problem, whether the merit function is coercive...

We present new linear convergence results for iterative methods for solving the variational inequality problem. The methods include the extragradient method, the proximal point method, a matrix splitting method and a certain feasible descent method. The proofs of the results are based on certain error bounds related to the algorithmic mappings. Mor...

Recently, Fang proposed approximating a linear program in Karmarkar's standard form by adding an entropic barrier function to the objective function and using a certain geometric inequality to transform the resulting problem into an unconstrained differentiable concave program. We show that, by using standard duality theory for convex programming,...

An extension of the proximal minimization algorithm is considered where only some of the minimization variables appear in the quadratic proximal term. The resulting iterates are interpreted in terms of the iterates of the standard algorithm, and a uniform descent property is shown that holds independently of the proximal terms used. This property i...

We analyze a distributed asynchronous algorithm, proposed by
Tsitsiklis and Bertsekas (1986), for optimal routing in a
virtual-circuit data network. We show that, under a strict convexity
assumption on the link delay functions, the sequence of routings
generated by the algorithm converges in the space of path flows and the
convergence rate is linea...

Technical report by Dimitri P. Bertsekas and Paul Tseng. Includes bibliographical references (pp. 25–27). Supported by the National Science Foundation (DDM-8903385, CCR-9103804) and the Army Research Office (ARO DAAL03-92-G-0115).

We analyze the convergence of an approximate gradient projection method for minimizing the sum of continuously differentiable functions over a nonempty closed convex set. In this method, the functions are aggregated and, at each iteration, a succession of gradient steps, one for each of the aggregate functions, is applied and the result is projecte...

We analyze the rate of convergence of certain dual ascent methods for the problem of minimizing a strictly convex essentially smooth function subject to linear constraints. Included in our study are dual coordinate ascent methods and dual gradient methods. We show that, under mild assumptions on the problem, these methods attain a linear rate of co...

We survey and extend a general approach to analyzing the convergence and the rate of convergence of feasible descent methods that does not require any nondegeneracy assumption on the problem. This approach is based on a certain error bound for estimating the distance to the solution set and is applicable to a broad class of methods.

The affine-scaling algorithm, first proposed by Dikin, is presently enjoying great popularity as a potentially effective means of solving linear programs. An outstanding question about this algorithm concerns its convergence in the presence of degeneracy. In this paper, we give new convergence results for this algorithm that do not require any non-...
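
On a nondegenerate toy LP, Dikin's iteration rescales by the current iterate and steps against the projected cost. A sketch specialized to min cᵀx subject to x₁ + x₂ = 1, x > 0 (my own illustration of the basic iteration, not the paper's degenerate setting):

```python
import math

def dikin(c, x, beta=0.5, iters=60):
    """Dikin affine-scaling iterates for: min c.x  s.t.  x1 + x2 = 1, x > 0.
    Each step: dual estimate w = (A X^2 A^T)^{-1} A X^2 c with A = [1, 1],
    reduced cost r = c - w, then a move along -X^2 r scaled by ||X r||,
    which keeps the iterate feasible and strictly positive for beta < 1."""
    x1, x2 = x
    for _ in range(iters):
        s = x1 * x1 + x2 * x2
        w = (x1 * x1 * c[0] + x2 * x2 * c[1]) / s
        r = [c[0] - w, c[1] - w]
        norm = math.hypot(x1 * r[0], x2 * r[1])
        if norm < 1e-15:
            break
        x1 -= beta * x1 * x1 * r[0] / norm
        x2 -= beta * x2 * x2 * r[1] / norm
    return [x1, x2]

# min x1 on the segment x1 + x2 = 1: the iterates drive x1 toward 0.
print(dikin([1.0, 0.0], [0.5, 0.5]))
```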

We give, for a class of monotone affine variational inequality problems, a simple characterization of when a certain residual function provides a bound on the distance from any feasible point to the solution set. This result has implications on the global linear convergence of a certain projection algorithm and of matrix splitting algorithms using...

Consider the problem of minimizing, over a polyhedral set, the composition of an affine mapping with a strictly convex essentially smooth function. A general result on the linear convergence of descent methods for solving this problem is presented. By applying this result, the linear convergence of both the gradient projection algorithm of Goldstei...

Consider the affine variational inequality problem. It is shown that the distance to the solution set from a feasible point near the solution set can be bounded by the norm of a natural residual at that point. This bound is then used to prove linear convergence of a matrix splitting algorithm for solving the symmetric case of the problem. This latt...

Technical report by D. P. Bertsekas and P. Tseng. Includes bibliographical references (leaves 5–8). Supported by the Army Research Office (DAAL03-86-K-0171) and the National Science Foundation (NSF DDM-8903385).

The coordinate descent method enjoys a long history in convex differentiable minimization. Surprisingly, very little is known about the convergence of the iterates generated by this method. Convergence typically requires restrictive assumptions such as that the cost function has bounded level sets and is in some sense strictly convex. In a recent w...

We show that a square matrix M has the property that Mx = 0 whenever xᵀMx = 0 if and only if it can be decomposed into the form EᵀAE, for some matrix E and some matrix A that is either positive definite or negative definite.
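
The easy direction of this characterization is worth recording. If M = EᵀAE with A positive definite (the negative definite case is symmetric), then for any x:

```latex
x^T M x = (Ex)^T A (Ex) = 0
  \;\Longrightarrow\; Ex = 0 \quad\text{(by definiteness of } A\text{)}
  \;\Longrightarrow\; Mx = E^T A (Ex) = 0 .
```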

Recently, Bertsekas and Tsitsiklis proposed a partially asynchronous implementation of the gradient projection algorithm of Goldstein and Levitin and Polyak for the problem of minimizing a differentiable function over a closed convex set. In this paper, the rate of convergence of this algorithm is analyzed. It is shown that if the standard assumpti...
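The underlying (synchronous) Goldstein-Levitin-Polyak iteration is simply x ← P(x − α∇f(x)), projection onto the constraint set after a gradient step. A minimal sketch on box constraints, with illustrative problem data and stepsize:

```python
import numpy as np

def gradient_projection(grad, proj, x, step, iters=500):
    """Goldstein-Levitin-Polyak gradient projection: x <- P(x - step*grad(x))."""
    for _ in range(iters):
        x = proj(x - step * grad(x))
    return x

# minimize 0.5*||x - z||^2 over the box [0, 1]^2, with z outside the box
z = np.array([1.5, -0.5])
grad = lambda x: x - z
proj = lambda x: np.clip(x, 0.0, 1.0)
x = gradient_projection(grad, proj, np.zeros(2), step=0.5)
# the optimum is the projection of z onto the box: (1.0, 0.0)
```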

Consider the problem of minimizing a strictly convex (possibly nondifferentiable and nonseparable) cost subject to linear constraints. We propose a dual coordinate ascent method for this problem that uses inexact line search and either essentially cyclic or Gauss-Southwell order of coordinate relaxation. We show, under very weak conditions, that th...

We consider the following basic communication problems in a hypercube network of processors: the problem of a single processor sending a different packet to each of the other processors, the problem of simultaneous broadcast of the same packet from every processor to all other processors, and the problem of simultaneous exchange of different packet...
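The single-node broadcast, for instance, can be completed in d steps by recursive doubling across the cube's dimensions; a small simulation (function name hypothetical):

```python
def hypercube_broadcast(d, source=0):
    """Simulate a single-node broadcast in a d-dimensional hypercube.

    At step k, every node that already holds the packet forwards it to its
    neighbor across dimension k (node id XOR 2^k), so the broadcast
    completes in d steps -- the diameter of the cube.
    """
    have = {source}
    for k in range(d):
        have |= {node ^ (1 << k) for node in have}
    return have

assert hypercube_broadcast(3) == set(range(8))  # all 8 nodes reached in 3 steps
```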

Consider the problem of minimizing a convex essentially smooth function over a polyhedral set. For the special case where the cost function is strictly convex, we propose a feasible descent method for this problem that chooses the descent directions from a finite set of vectors. When the polyhedral set is the nonnegative orthant or the entire space...

In this paper, we propose a decomposition algorithm for convex differentiable minimization. This algorithm at each iteration solves a variational inequality problem obtained by adding to the gradient of the cost function a strongly proximal-related function. A line search is then performed in the direction of the solution to this variational inequa...

The problem of computing a fixed point of a nonexpansive function f is considered. Sufficient conditions are provided under which a parallel, partially asynchronous implementation of the iteration x:= f(x) converges. These results are then applied to (i) quadratic programming subject to box constraints, (ii) strictly convex cost network flow optimi...
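A serialized simulation of the partially asynchronous iteration x := f(x) with bounded delays: one component is updated at a time from possibly stale data. Here f is a maximum-norm contraction (a special case of a nonexpansive map), and the delay bound and update schedule are illustrative:

```python
import numpy as np

def async_fixed_point(f, x, delay=3, iters=200, rng=None):
    """Simulated partially asynchronous iteration x := f(x).

    One component is updated per step, using component values that may be
    out of date by at most `delay` steps (the partial-asynchronism bound);
    every component is updated infinitely often.
    """
    rng = rng or np.random.default_rng(0)
    history = [x.copy()]
    n = len(x)
    for t in range(iters):
        i = t % n                       # round-robin update schedule
        stale = history[max(0, len(history) - 1 - rng.integers(0, delay + 1))]
        x[i] = f(stale)[i]              # update from (possibly) stale data
        history.append(x.copy())
    return x

# f is a max-norm contraction with fixed point (1, 1)
f = lambda x: 0.5 * x + 0.5
x = async_fixed_point(f, np.zeros(2))
```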

We propose a dual descent method for the problem of minimizing a convex, possibly nondifferentiable, separable cost subject to linear constraints. The method has properties reminiscent of the Gauss-Seidel method in numerical analysis and uses the ε-complementary slackness mechanism introduced in Bertsekas, Hosein and Tseng (1987) to ensure finite con...

## Citations

... Assume that x ∈ R n is an impulse response variable and let X ∈ R n×n be a Hankel matrix formed by the entries of x. From [29], [61]- [63], the minimum order time domain system identification problem can be formulated as, ...

... Furthermore, Chazan et al. [18] first proposed an asynchronous method for solving linear systems in 1969. Asynchronous models are divided into totally asynchronous and partially asynchronous [19]. In the totally asynchronous model, the delays can be arbitrary. ...

... If all BSAPs, inter-swapping sub-paths and generalized inter-swapping sub-paths can be enumerated, UE-FD-GNCP can be solved using solution algorithms proposed for solving GNCP in the literature. Among others, one approach is to reformulate the GNCP into an optimization problem by means of a merit function (e.g., Tseng et al., 1996). Moreover, linear approximation can be applied to the functions of link travel time, dwelling time and energy consumption to further improve computational efficiency. ...

... The quality of the match between the electrochemical trace of an unknown material and that of a material in the database obviously depends on the choice of an appropriate distance. Designing such a distance requires both the field expertise of the CRs and the knowledge of mathematical topologies held by HE Arc Ingénierie (Sardy et al. 2002). ...

... The selected values will be adopted also for the remaining numerical instances considered in this subsection. Fig. 3 illustrates the impact those hyperparameters have on the convergence of Algorithm 1 to an equilibrium of the underlying game (computed through a standard extragradient type method as in [42]), averaged over 20 random initialization procedures for each coefficients combination. In particular, we have considered β ∈ {0, 0.5, 1, 2, 5} and K in ∈ {0.01, 0.1, 0.2, 0.4}·K, with K = 100. ...

... For solving a class of CP problems, Nesterov presented the accelerated gradient method in a celebrated work [12]. Now, the accelerated gradient method has also been generalized by Beck and Teboulle [13], Tseng [14], Nesterov [15,16] to solve an emerging class of composite CP problems. In 2012, Lan [17] further showed that the accelerated gradient method is optimal for solving not only smooth CP problems but also general nonsmooth and stochastic CP problems. ...

... Proposition 2.1.15 ([83]). A matrix M ∈ ℝⁿˣⁿ is psd-plus if and only if M can be decomposed into the form EᵀAE for some matrix E ∈ ℝʳˣⁿ and some matrix A ∈ ℝʳˣʳ of the form A = I + B with B skew-symmetric and r the rank of M. ...

... In particular, this type of analysis is the perfect tool for segmenting time series and detecting model changes in the dynamics. These methods have recently been approached with statistical tools in [19][20][21][22]. ...

... It has a long history and is extensively studied. A series of works study the convergence behavior of IG under various step size schemes and different assumptions on the problem; see, e.g., [31,45,43,44]. These works mainly provide weak convergence results (i.e., lim_{t→∞} ∇f(x_t) = 0) and do not provide explicit rates of convergence. ...

... s.t.: to be set to y_i d_j.) For solving this convex cost network flow problem, we use the ε-relaxation method by Bertsekas, Polymenakos, and Tseng (1997) (see also Guerriero & Tseng, 2002) and the FORTRAN code provided by these authors, except that for some test problem instances with quadratic costs we employed CPLEX's quadratic programming solver instead. ...