Mathematical Programming 72 (1996) 51-63
Hidden convexity in some nonconvex quadratically
constrained quadratic programming
Aharon Ben-Tal a,1, Marc Teboulle b,*,2,3
a Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology,
Haifa 32000, Israel
b School of Mathematical Sciences, Department of Statistics and Operations Research, Tel-Aviv University,
Ramat-Aviv, Tel-Aviv 69978, Israel
Received 12 May 1993
Abstract

We consider the problem of minimizing an indefinite quadratic objective function subject to two-sided indefinite quadratic constraints. Under a suitable simultaneous diagonalization assumption (which trivially holds for trust region type problems), we prove that the original problem is equivalent to a convex minimization problem with simple linear constraints. We then consider a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints, which is also shown to be equivalent to a minimax convex problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. Special cases and applications are also discussed. We outline interior-point polynomial-time algorithms for the solution of the equivalent convex programs.

Keywords: Indefinite quadratic problems; Nonconvex optimization; Duality
1. Introduction
Let Q be an n × n symmetric matrix, g and a given vectors in R^n, A an m × n matrix, b ∈ R^m, W an n × n symmetric positive definite matrix and r > 0. Consider the
" Corresponding author. Email: teboulle@math.tau.ac.il.
t This author's work was partially supported by GIE the Germanlsraeli Foundation for Scientific Research
and Development and by the Binational Science Foundation.
2 On leave from the Depamnent of Mathematics and Statistics, University of Maryland, Baltimore County,
Baltimore, MD 21228, United States.
~This author's work was partially supported by National Science Foundation Grants DMS9201297 and
DMS9401871.
00255610 @ 1996The Mathematical Programming Society, Inc. All rights reserved
SSDI 002556 I 0(95 )000208
quadratic programming problem
(QP)  min{ z^T Q z − 2 g^T z : Az = b, (z − a)^T W (z − a) ≤ r, z ∈ R^n }.

Since W is assumed symmetric and positive definite, without loss of generality and by a simple linear transformation, the ellipsoid constraint can be replaced by a ball constraint {z ∈ R^n : z^T z ≤ r}. Moreover, given a feasible point z₀ for the linear constraint, by using a standard reduction technique (see, e.g., [5, Chapter 10]), expressing a general feasible point as z = z₀ + Zy, where Z = null(A) (the null space of A), and AZ = 0, the linear constraints Az = b can also be eliminated.
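The two reduction steps just described can be sketched numerically. The following fragment (illustrative random data for W, A, b, a; not from the paper) converts the ellipsoid constraint to a ball constraint via a Cholesky factor of W and eliminates Az = b through a null-space basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
Bm = rng.standard_normal((n, n))
W = Bm @ Bm.T + np.eye(n)                  # symmetric positive definite
a = rng.standard_normal(n); r = 1.0

# Ellipsoid -> ball: with W = L L^T and w = L^T (z - a), the constraint
# (z - a)^T W (z - a) <= r becomes the ball constraint w^T w <= r.
L = np.linalg.cholesky(W)

# Eliminate A z = b: write z = z0 + Z y with A z0 = b and Z a basis of null(A).
z0 = np.linalg.lstsq(A, b, rcond=None)[0]  # particular solution
Z = np.linalg.svd(A)[2][m:].T              # null-space basis, A @ Z ~ 0

y = rng.standard_normal(n - m)
z = z0 + Z @ y                             # a general feasible point
assert np.allclose(A @ z, b)               # linear constraints hold for every y
w = L.T @ (z - a)
assert np.isclose(w @ w, (z - a) @ W @ (z - a))
```

Any choice of y yields a point satisfying the linear constraints exactly, which is the essence of the reduction.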
Quadratic programming problems are of primary importance in various applications and arise as subproblems in many optimization algorithms. Problems of type (QP) arise, for example, when solving ill-posed problems via regularization and related eigenvalue perturbation problems, see, e.g., [7,9], and in trust region methods for solving continuous optimization problems, see, e.g., [8,12]. More recently, problems of type (QP) have been shown to be at the heart of affine scaling type algorithms for solving linear and quadratic programming problems, see, e.g., [16], and have been used as continuous relaxations of combinatorial optimization problems that can then be solved via interior point methods, see, e.g., [11].
In this paper we consider two different classes of more general nonconvex quadratic problems which include (QP) as a special case, and show that these are completely equivalent to convex programming problems.

First, we consider the problem of minimizing an indefinite quadratic objective subject to two-sided indefinite quadratic constraints. Problems of this type have recently been studied by Stern and Wolkowicz [15] as a natural extension of the quadratic model (QP). Under suitable assumptions, necessary and sufficient optimality conditions for this nonconvex problem were derived in [15]. This surprising and interesting result led them to establish a duality result with no duality gap, thus revealing that indefinite quadratic problems of this type are implicit convex programs. A strong duality result for minimizing an indefinite quadratic objective over a sphere was also recently and independently proved in [6]. In Section 2 it is shown that this class of nonconvex problems is in fact completely equivalent to a convex minimization problem with simple linear constraints. This result is proved by using a double duality argument which leads us to discover the nonlinear transformation rendering the nonconvex problem convex. Next, in Section 3 we consider another class of nonconvex quadratic programming problems which consists of minimizing a concave quadratic function subject to finitely many convex quadratic constraints. This problem is shown to be equivalent to a minimax convex problem. This is proved using completely different arguments, based on some double minimization reformulations of the problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. We note that as a direct byproduct of our new convex reformulations, both problems can be solved in polynomial time via interior-point methods such as the ones given, for example, in [13,16].
2. Indefinite quadratically constrained problems
Consider the quadratically constrained quadratic problem

(Q)  min{ q(z) := z^T Q z − 2 g^T z : l ≤ z^T M z ≤ u, z ∈ R^n },

where Q and M are n × n symmetric indefinite matrices, g ∈ R^n and l and u, with l ≤ u, are given lower and upper bounds. Note that, when l = 0 < u and M is a positive definite scaling matrix, the constraint set is convex, and (Q) is a simplified version of the model (QP). Problems of this type arise in trust region methods, in particular with M = I and where u > 0 is the trust region radius. Another interesting special case is when l = u and M = I, which leads to eigenvalue perturbation type problems (see, e.g., [9,15]):

min{ z^T Q z − 2 g^T z : ‖z‖² = l, z ∈ R^n },

where ‖·‖ denotes the l₂-norm.
To analyze problem (Q), we first transform it to an equivalent, more appropriate form. To achieve this task, we first need to recall a basic matrix analysis result on simultaneous diagonalization, see [10]. The notation M > 0 (M ≥ 0) means that the real symmetric matrix M is positive definite (positive semidefinite).

Theorem 1 (Horn and Johnson [10, Theorem 7.6.4]). Let A, B ∈ R^{n×n} be two symmetric matrices and suppose that there exist α, β ∈ R such that

αA + βB > 0.

Then there exists a nonsingular matrix C ∈ R^{n×n} such that both C^T A C and C^T B C are diagonal.

Remark 2. If one of the matrices above is positive definite, say, for example, A, then we can take C^T A C = I (see [10, Corollary 7.6.5]).
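Theorem 1 is constructive once a positive definite combination is at hand: with P = αA + βB = LL^T and an eigendecomposition of L^{-1}AL^{-T}, the matrix C = L^{-T}U diagonalizes both A and B by congruence. A numerical sketch (the pair below is manufactured so that A + B = I, which guarantees the hypothesis; the crude scan for the combination is for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric, indefinite
B = np.eye(n) - A                                     # manufactured so A + B = I > 0

def pos_combo(A, B):
    # Crude scan for alpha*A + beta*B > 0 (guaranteed to succeed here at (1, 1)).
    for alpha in np.linspace(-5, 5, 41):
        for beta in np.linspace(-5, 5, 41):
            P = alpha * A + beta * B
            if np.linalg.eigvalsh(P).min() > 1e-2:
                return P
    raise ValueError("no positive definite combination found")

P = pos_combo(A, B)
Lc = np.linalg.cholesky(P)                            # P = Lc Lc^T
M = np.linalg.solve(Lc, np.linalg.solve(Lc, A).T).T   # Lc^{-1} A Lc^{-T}
D, U = np.linalg.eigh((M + M.T) / 2)
C = np.linalg.solve(Lc.T, U)                          # C = Lc^{-T} U, nonsingular

def offdiag(X):
    return np.abs(X - np.diag(np.diag(X))).max()

assert offdiag(C.T @ A @ C) < 1e-8                    # both congruences are diagonal
assert offdiag(C.T @ B @ C) < 1e-8
```

The diagonality of C^T B C follows since C^T P C = I and C^T A C is diagonal, so C^T B C = (I − αC^T A C)/β for the combination found.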
In the rest of this section we make the following blanket assumption with respect to problem (Q).

Assumption. (a) Problem (Q) is feasible.
(b) The Simultaneous Diagonalization condition holds, i.e.,

(SD)  ∃α ∈ R such that Q + αM > 0.

Note that according to Finsler's Theorem [4], a sufficient condition for Assumption (b) to hold is that

x^T Q x > 0, for all x ≠ 0 such that x^T M x = 0.

From Assumption (b), invoking Theorem 1, we have that there exists a nonsingular matrix C such that
C^T Q C = D := diag(d₁, …, d_n),  d_j ∈ R,  j = 1, …, n,   (1)
C^T M C = S := diag(s₁, …, s_n),  s_j ∈ R,  j = 1, …, n.   (2)

Then, using the transformation z := Cx, defining c = C^T g and using (1) and (2), problem (Q) can be rewritten as

(Q_C)  min{ q_C(x) := x^T D x − 2 c^T x : l ≤ x^T S x ≤ u, x ∈ R^n }.

We shall be working with this reformulation. Hence, in the rest of this section we now assume that problem (Q_C) is feasible and that the simultaneous diagonalization condition holds, namely,

(SD)  ∃η ∈ R : D + ηS > 0.   (3)
The next result gives a dual representation of problem (Q_C). Another dual formulation was also derived in [15], with a different proof, and in [6] for the simpler problem with a sphere constraint. We denote by R₊^n the set of vectors with nonnegative components, and by R₊₊^n the set of vectors with strictly positive components.

Lemma 3. A Lagrangian dual of (Q_C) is given by the two-dimensional concave dual problem

(DQ_C)  sup_{μ,ν≥0} { μl − νu − Σ_{j∈J₁} c_j² / (d_j + (ν − μ)s_j) }

such that

d_j + (ν − μ)s_j > 0,  j ∈ J₁,
d_j + (ν − μ)s_j ≥ 0,  j ∈ J₀,

where J₀ = {j ∈ [1, n] : c_j = 0}, J₁ = {j ∈ [1, n] : c_j ≠ 0}.
Proof. The Lagrangian of problem (Q_C) is

L(x, μ, ν) = x^T D x − 2c^T x + μ(l − x^T S x) + ν(x^T S x − u)

and the dual problem is the concave maximization problem given by

sup{ h(μ, ν) : (μ, ν) ∈ R₊ × R₊ ∩ dom h },

where h(μ, ν) := inf{ L(x, μ, ν) : x ∈ R^n } and dom h := {(μ, ν) : h(μ, ν) > −∞}. Denote

γ_j := d_j + (ν − μ)s_j,  j = 1, …, n,

and note that

inf_x L(x, μ, ν) = Σ_{j=1}^n inf_{x_j} { γ_j x_j² − 2c_j x_j } + μl − νu.

Now for each j ∈ J₁, we have

inf_{x_j} { γ_j x_j² − 2c_j x_j } = −c_j²/γ_j  if γ_j > 0,  −∞  if γ_j ≤ 0,

while, for j ∈ J₀,

inf_{x_j} { γ_j x_j² − 2c_j x_j } = inf_{x_j} { γ_j x_j² } = 0  if γ_j ≥ 0,  −∞  if γ_j < 0.

Hence,

h(μ, ν) = μl − νu − Σ_{j∈J₁} c_j²/γ_j  if γ_j > 0 for j ∈ J₁ and γ_j ≥ 0 for j ∈ J₀;  −∞ otherwise,

and therefore the dual problem is (DQ_C) as stated in Lemma 3. □
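The dual function h(μ, ν) computed in this proof can be exercised numerically. The sketch below (a small diagonal instance invented for illustration) checks the weak duality inequality h(μ, ν) ≤ q_C(x) over sampled dual pairs and primal feasible points:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
d = np.array([1.0, -0.5, 2.0, 0.0])        # diagonal of D (indefinite)
s = np.array([1.0, 1.0, 2.0, 0.5])         # diagonal of S (positive here)
c = np.array([1.0, 0.3, 0.0, -0.7])        # third component of c is 0: that index is in J0
l, u = 0.5, 2.0
J1 = c != 0

def primal(x):                             # q_C(x)
    return x @ (d * x) - 2 * c @ x

def dual(mu, nu):                          # h(mu, nu) from the proof of Lemma 3
    gam = d + (nu - mu) * s
    if np.any(gam[J1] <= 0) or np.any(gam[~J1] < 0):
        return -np.inf
    return mu * l - nu * u - np.sum(c[J1] ** 2 / gam[J1])

for _ in range(2000):                      # weak duality: h(mu, nu) <= q_C(x)
    x = rng.standard_normal(n)
    t = x @ (s * x)
    x = x * np.sqrt((l + u) / (2 * t))     # rescale so that x^T S x = (l + u)/2
    mu, nu = rng.uniform(0, 3, size=2)
    assert dual(mu, nu) <= primal(x) + 1e-9
```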
Remark 4. Note that the sup in the dual problem (DQ_C) need not be attained. The condition under which the attainment is guaranteed will be given in Lemma 6 below.

From standard duality we always have the weak duality result

inf(Q_C) ≥ sup(DQ_C).

To obtain a strong duality result, namely equality, we need convexity and some constraint qualification [14]. Problem (Q_C) being nonconvex does not appear to be a candidate. However, since the dual problem (DQ_C) is by construction always concave, one can still derive a strong duality result for it. Usually, by taking the dual of a dual problem one can expect to obtain the original primal, if once again we have convexity and some constraint qualification. It turns out that the dual of the dual of (DQ_C) is not the original nonconvex primal (Q_C) (that would be impossible!), but leads to another convex problem, which will in turn be shown to be completely equivalent to the nonconvex primal (Q_C), thus revealing the hidden convex nature of problem (Q_C).
Lemma 5. A Lagrangian dual of (DQ_C) is given by the linearly constrained convex minimization problem

(DDQ_C)  inf{ Σ_{j=1}^n (d_j y_j − 2|c_j|√y_j) : l ≤ s^T y ≤ u, y ∈ R₊^n }.

Proof. The dual problem (DQ_C) can be equivalently written as

(DQ_C)  sup{ μl − νu − Σ_{j∈J₁} c_j²/t_j : μ, ν ≥ 0, t_j ∈ R, j ∈ J₁ }

subject to
d_j + (ν − μ)s_j = t_j,  j ∈ J₁,
d_j + (ν − μ)s_j ≥ 0,  j ∈ J₀,
t_j > 0,  j ∈ J₁,
μ, ν ≥ 0.

Let y_j ∈ R, j ∈ J₁, be the dual multiplier associated with the first linear equality constraint, and let y_j ≥ 0, j ∈ J₀, be the dual multiplier associated with the second linear inequality constraint. The dual problem of (DQ_C) is

inf{ d^T y + c(y) + sup_{μ,ν≥0} { μ(l − s^T y) + ν(s^T y − u) } : y_j ∈ R, j ∈ J₁, y_j ≥ 0, j ∈ J₀ },   (4)

where

c(y) := Σ_{j∈J₁} sup_{t_j>0} { −t_j y_j − c_j²/t_j }.   (5)

The computation of the inner maximization in (4) with respect to μ and ν leads to the optimal value zero with the linear constraint l ≤ s^T y ≤ u, and +∞ otherwise. Now, for j ∈ J₁,

sup_{t_j>0} { −t_j y_j − c_j²/t_j } = −2|c_j|√y_j  if y_j ≥ 0,  +∞  if y_j < 0.

Substituting these two computations in (4), we get (DDQ_C). □
The next lemma gives simple conditions for attainment of the infimum and supremum for problems (DDQ_C) and (DQ_C), respectively.

Lemma 6. Under the simultaneous diagonalization condition (SD), the infimum in problem (DDQ_C) is attained for some feasible y*. Moreover, under the Slater-type condition

∃y > 0 : l ≤ s^T y ≤ u,   (6)

the supremum in problem (DQ_C) is attained for some feasible pair (μ*, ν*).

Proof. We use the Fenchel duality technique to prove the result. For that purpose, let f(y) := d^T y + δ(y | {y : l ≤ s^T y ≤ u}) and g(y) := 2Σ_{j=1}^n |c_j|√y_j − δ(y | R₊^n), where δ(· | C) denotes the indicator function of a set C, i.e., is equal to 0 if x ∈ C and +∞ otherwise. Clearly, f is a closed proper convex function, g is a closed proper concave function, and problem (DDQ_C) can be written as

inf{ f(y) − g(y) : y ∈ dom f ∩ dom g }.

Invoking Fenchel's duality theorem [14, Theorem 31.1, p. 327], one has

inf{ f(y) − g(y) } = sup_x { g*(x) − f*(x) }   (7)
if either of the following conditions is satisfied:
(i) dom f ∩ ri dom g ≠ ∅,
(ii) ri dom f* ∩ ri dom g* ≠ ∅.

Under (i) the supremum is attained at some x, while under (ii) the infimum is attained at some y. Here, f* denotes the conjugate of f and ri stands for relative interior. (Recall that since here f is polyhedral, the relative interior of the domain of f can be replaced by the domain of f in (i).)

We now compute the corresponding conjugates of f and g. Using linear programming duality, we obtain

f*(x) = sup_{l ≤ s^T y ≤ u} { y^T x − d^T y } = inf{ −μl + νu : x = d + (ν − μ)s, μ, ν ≥ 0 }.

Using the definition of g(y), we have

g*(x) = inf_{y≥0} { x^T y − g(y) } = Σ_{j=1}^n inf_{y_j≥0} { x_j y_j − 2|c_j|√y_j }.   (8)

Now, for j ∈ J₁,

inf_{y_j≥0} { x_j y_j − 2|c_j|√y_j } = −c_j²/x_j  if x_j > 0,  −∞  if x_j ≤ 0,

while, for j ∈ J₀,

inf_{y_j≥0} { x_j y_j − 2|c_j|√y_j } = inf_{y_j≥0} { x_j y_j } = 0  if x_j ≥ 0,  −∞  if x_j < 0.

Therefore,

g*(x) = −Σ_{j∈J₁} c_j²/x_j  if x_j > 0 for j ∈ J₁ and x_j ≥ 0 for j ∈ J₀;  −∞ otherwise.

Since ri dom g* = R₊₊^n and

dom f* = { x ∈ R^n : x = d + (ν − μ)s, for some μ, ν ≥ 0 } = { x ∈ R^n : x = d + λs, for some λ ∈ R },

condition (ii) is equivalent to ∃λ ∈ R : D + λS > 0, and the infimum in (7) is attained for some feasible y*, proving the first part of the lemma. Similarly, from the definition of f and g we have that condition (i) above is equivalent to the Slater-type condition (6), and the supremum in (7) is attained at some feasible μ*, ν* ≥ 0, proving the second part of the lemma. Finally, a direct substitution of f*, g* in (7) shows that the supremum problem in (7) is exactly the dual problem (DQ_C). □
The double duality approach has just revealed that the indefinite quadratic primal is equivalent to a convex program via the simple transformation

x_j* = sgn(c_j)√(y_j*),  j = 1, …, n,   (9)

where sgn(c_j) = 1 if c_j > 0, 0 if c_j = 0, and −1 if c_j < 0.

Theorem 7. Under the blanket assumption, the indefinite quadratic program (Q_C) is equivalent to the convex program (DDQ_C). More precisely, there exists an optimal solution y* of (DDQ_C) for which a corresponding optimal solution of (Q_C) is given by (9), and we have inf(Q_C) = min(DDQ_C).

Proof. The proof follows immediately by combining Lemmas 3, 5 and 6. Indeed, we have

inf(Q_C) ≥ sup(DQ_C) = min(DDQ_C).

But with x_j* = sgn(c_j)√(y_j*) we have x* feasible for (Q_C) and q_C(x*) = min(Q_C) = dd(y*) = min(DDQ_C), where dd(y) denotes the objective function in (DDQ_C). □
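Theorem 7 can be illustrated end-to-end on the trust region special case (l = 0, u = r, S = I), where (DDQ_C) reduces to a problem solvable by bisection on the multiplier of the constraint Σ y_j ≤ r. The instance below is invented for illustration (and chosen with all c_j ≠ 0, which avoids the degenerate "hard case" of trust region theory); the recovered point x* from transformation (9) is then checked against random feasible points:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 3, 2.0
d = np.array([-1.0, 0.5, 2.0])            # indefinite diagonal objective
c = np.array([0.8, -0.3, 1.1])            # all nonzero

def dd(y):                                 # objective of (DDQ_C)
    return d @ y - 2 * np.abs(c) @ np.sqrt(y)

def qc(x):                                 # nonconvex objective of (Q_C)
    return x @ (d * x) - 2 * c @ x

def solve_ddqc():
    # Minimize dd(y) over y >= 0 with sum(y) <= r.  With all c_j != 0 the
    # optimality conditions give y_j = (c_j/(d_j + lam))^2, where lam >= 0
    # solves sum_j (c_j/(d_j + lam))^2 = r (the classical secular equation),
    # unless the unconstrained minimizer is already feasible.
    if d.min() > 0 and np.sum((c / d) ** 2) <= r:
        return (c / d) ** 2
    lo = max(0.0, -d.min()) + 1e-12
    hi = lo + 1.0
    while np.sum((c / (d + hi)) ** 2) > r:
        hi *= 2
    for _ in range(200):                   # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.sum((c / (d + mid)) ** 2) > r else (lo, mid)
    return (c / (d + 0.5 * (lo + hi))) ** 2

y_star = solve_ddqc()
x_star = np.sign(c) * np.sqrt(y_star)      # transformation (9)
assert x_star @ x_star <= r + 1e-6         # feasible for (Q_C)
assert np.isclose(qc(x_star), dd(y_star), atol=1e-8)

for _ in range(5000):                      # no sampled feasible point does better
    x = rng.standard_normal(n)
    x *= np.sqrt(r * rng.uniform()) / np.linalg.norm(x)
    assert qc(x) >= qc(x_star) - 1e-6
```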
As an immediate consequence of Theorem 7, we can then formulate trust region type problems and perturbed eigenvalue problems as equivalent convex minimization problems with simple linear constraints. Note that for this special case, the (SD) condition reduces to: there exists an α ∈ R such that D + αI > 0, which always holds. Moreover, the Slater-type condition in Lemma 6 also holds trivially.

Corollary 8. The indefinite trust region type problem minimize { z^T Q z − 2g^T z : ‖z‖² ≤ r, z ∈ R^n } is equivalent to the convex problem

min{ Σ_{j=1}^n (d_j y_j − 2|c_j|√y_j) : Σ_{j=1}^n y_j ≤ r, y ∈ R₊^n }.

Proof. Apply Theorem 7 to the special case with l = 0, u = r and M = I. □

Similarly, applying Theorem 7 to the special case l = u, we obtain the equivalent convex formulation for eigenvalue perturbation problems, with the linear inequality constraint in Corollary 8 replaced by a linear equality.
Finally, we mention that another byproduct of Theorem 7 is that necessary and sufficient optimality conditions for problem (Q) can be derived by applying the Karush-Kuhn-Tucker optimality conditions to the convex formulation (DDQ_C), in conjunction with the use of the nonlinear transformation (9) and the relations (1) and (2). These
optimality conditions have also been derived directly in [15], and in [6,16] for the ball-constrained case.

Next, we briefly describe a polynomial-time interior-point method, based on [13], to solve the equivalent convex problem

(DDQ_C)  minimize { Σ_{j=1}^n (d_j y_j − 2|c_j|√y_j) : l ≤ s^T y ≤ u, y ∈ R₊^n }.

First we introduce n additional variables w_j, denote z = (y, w) and rewrite the problem equivalently as

minimize f^T z = Σ_{j=1}^n (d_j y_j − 2|c_j| w_j) such that z ∈ G,   (10)

where

G = { z = (y, w) : w_j² − y_j ≤ 0, j = 1, …, n, l ≤ s^T y ≤ u }.

The domain G admits an explicit (n + 2)-self-concordant barrier

F(z) = −Σ_{j=1}^n ln(y_j − w_j²) − ln(u − s^T y) − ln(s^T y − l).

In order to solve the problem via an interior-point method associated with this barrier, we need G to be bounded; assume that this is the case. The assumption means exactly that all coordinates of s differ from 0 and are of the same sign; if this is the case (it surely holds for the trust region problem), we may assume, without loss of generality, s > 0 and, consequently, u > 0 and l ≥ 0.
To start the method, let us find the analytic center of G, i.e., the minimizer of F over int G; here the center is given analytically:

ȳ_j = 1/(α s_j),  w̄_j = 0,  j = 1, …, n,

where α is the (unique) root of the equation

ul α² − (n + 1)(u + l)α + n(n + 2) = 0

satisfying u > n/α > l.
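The analytic center formula can be verified directly: the gradient of the barrier F must vanish there. A quick numerical check (data invented for illustration):

```python
import numpy as np

n, l, u = 4, 0.5, 3.0
s = np.array([1.0, 2.0, 0.5, 1.5])         # s > 0, so G is bounded

# alpha solves  u*l*alpha^2 - (n+1)*(u+l)*alpha + n*(n+2) = 0  with u > n/alpha > l.
roots = np.roots([u * l, -(n + 1) * (u + l), n * (n + 2)])
alpha = next(a.real for a in roots if u > n / a.real > l)

y_bar = 1.0 / (alpha * s)                  # analytic center: y_j = 1/(alpha s_j), w = 0
t = s @ y_bar                              # equals n/alpha, strictly inside (l, u)
# Gradient of F(y, 0) = -sum_j ln(y_j) - ln(u - s^T y) - ln(s^T y - l) must vanish
# (the w-part of the gradient is zero by symmetry at w = 0).
grad = -1.0 / y_bar + s / (u - t) - s / (t - l)
assert l < t < u
assert np.abs(grad).max() < 1e-8
```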
The method, in its basic (short-step) form, generates the sequence of pairs (t_i, z_i) according to the following rule:

t_{i+1} = (1 + κ/√(n + 2)) t_i,  z_{i+1} = z_i − [F″(z_i)]^{−1}(t_{i+1} f + F′(z_i)),

where κ is an appropriately chosen positive absolute constant, z₀ = z̄ is the analytic center of G and t₀ is given by

t₀ = κ / √( f^T [F″(z̄)]^{−1} f ).
For any ε ∈ (0, 1), this method generates an ε-solution to the problem, i.e., finds z_i such that

f^T z_i − min_G f^T z ≤ ε ( max_G f^T z − min_G f^T z ),

in no more than O(1)√n ln(2n/ε) steps.
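A rough numerical illustration of the barrier approach on formulation (10): the sketch below uses a damped Newton method with backtracking on t f^T z + F(z) and a doubling schedule for t, rather than the exact short-step recursion with the constant κ; the instance is invented, and the result is compared against a brute-force grid minimization of the (DDQ_C) objective:

```python
import numpy as np

n = 2
d = np.array([1.0, -0.25])
c = np.array([0.9, 0.6])
s = np.array([1.0, 1.0]); l, u = 0.1, 1.5
f = np.concatenate([d, -2 * np.abs(c)])    # objective of (10), z = (y, w)

def F(z):                                   # the (n+2)-self-concordant barrier
    y, w = z[:n], z[n:]
    a = y - w ** 2; b1 = u - s @ y; b2 = s @ y - l
    if np.any(a <= 0) or b1 <= 0 or b2 <= 0:
        return np.inf
    return -np.sum(np.log(a)) - np.log(b1) - np.log(b2)

def grad_hess(z):
    y, w = z[:n], z[n:]
    a = y - w ** 2; b1 = u - s @ y; b2 = s @ y - l
    g = np.concatenate([-1 / a + s / b1 - s / b2, 2 * w / a])
    H = np.zeros((2 * n, 2 * n))
    H[:n, :n] = np.diag(1 / a ** 2) + np.outer(s, s) * (1 / b1 ** 2 + 1 / b2 ** 2)
    H[:n, n:] = H[n:, :n] = np.diag(-2 * w / a ** 2)
    H[n:, n:] = np.diag(2 / a + 4 * w ** 2 / a ** 2)
    return g, H

z = np.concatenate([np.full(n, (l + u) / (2 * n)), np.zeros(n)])  # interior start
t = 1.0
for _ in range(30):                         # outer loop: increase t
    for _ in range(20):                     # inner loop: damped Newton steps
        g, H = grad_hess(z)
        step = np.linalg.solve(H, t * f + g)
        phi, alpha = t * f @ z + F(z), 1.0
        while alpha > 1e-12 and t * f @ (z - alpha * step) + F(z - alpha * step) > phi:
            alpha /= 2
        if t * f @ (z - alpha * step) + F(z - alpha * step) <= phi:
            z = z - alpha * step            # accept only improving, feasible steps
    t *= 2.0

assert F(z) < np.inf                        # iterate stayed strictly inside G
# Brute-force check against the (DDQ_C) objective on a grid.
gr = np.linspace(1e-4, u, 400)
Y1, Y2 = np.meshgrid(gr, gr)
mask = (Y1 + Y2 >= l) & (Y1 + Y2 <= u)
dd = d[0] * Y1 + d[1] * Y2 - 2 * c[0] * np.sqrt(Y1) - 2 * c[1] * np.sqrt(Y2)
assert abs(f @ z - dd[mask].min()) < 5e-2
```

At the optimum the slack variables satisfy w_j = √(y_j), so the linear objective f^T z of (10) reproduces the value of the original (DDQ_C) objective.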
3. Concave quadratic objective with convex constraints
We now consider the minimization of a concave quadratic objective subject to finitely many convex constraints. Let p_k, k = 1, …, l, be a collection of vectors in R^n and let A_i, i = 1, …, m, be a collection of n × n symmetric positive semidefinite matrices. We consider the nonconvex problem of finding vectors z_k ∈ R^n, k = 1, …, l, solving

(C)  inf{ −½ Σ_{k=1}^l (p_k^T z_k)² : Σ_{k=1}^l z_k^T A_i z_k ≤ 1, i = 1, …, m, z_k ∈ R^n }.

Problems of this type arise in the context of structural optimization, particularly truss topology design problems, see, e.g., [1]. Other examples of problems with finitely many convex quadratic constraints also arise in some generalizations of trust region methods, see, e.g., [17].

Problem (C) is nonconvex since it consists of minimizing a concave objective subject to convex constraints.

To avoid uninteresting cases, we make the following assumption.

Assumption. min(C) < 0, where min(C) denotes the optimal value of problem (C).

The above assumption eliminates the trivial solutions. Indeed, if the above does not hold, then z_k = 0 is optimal for all k.
We prove below that problem (C) can be transformed to an equivalent convex programming problem. Before doing so, we start with a simple and useful technical result.

Lemma 9. Let α > 0 and a ∈ R^l be given. Then,
(i) min_{τ>0} { τ − (2ατ)^{1/2} } = −α/2, with minimizer τ* = α/2;
(ii) min{ s^T a : ‖s‖₂ = 1, s ∈ R^l } = −‖a‖ := −(Σ_{k=1}^l a_k²)^{1/2}, with minimizer s* = −a/‖a‖.

Proof. (i) follows by simple calculus and (ii) is an immediate consequence of the Cauchy-Schwarz inequality. □
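Both parts of Lemma 9 are easy to confirm numerically (values below are arbitrary test data):

```python
import numpy as np

alpha = 1.7
taus = np.linspace(1e-4, 10, 200001)
vals = taus - np.sqrt(2 * alpha * taus)
assert np.isclose(vals.min(), -alpha / 2, atol=1e-4)          # part (i): minimum value
assert np.isclose(taus[vals.argmin()], alpha / 2, atol=1e-3)  # part (i): minimizer

rng = np.random.default_rng(4)
a = rng.standard_normal(5)
s_star = -a / np.linalg.norm(a)
samples = rng.standard_normal((20000, 5))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)     # random unit vectors
assert np.isclose(s_star @ a, -np.linalg.norm(a))             # part (ii): value
assert (samples @ a).min() >= s_star @ a - 1e-9               # no unit s does better
```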
Theorem 10. The nonconvex quadratic program (C) is equivalent to the smooth convex program (C4) given below, which in turn is equivalent to the convex minimax problem

(C5)  min{ max_{1≤i≤m} Σ_{k=1}^l y_k^T A_i y_k/(2λ_k) − Σ_{k=1}^l p_k^T y_k : (y, λ) ∈ R^{nl} × Δ },

where Δ := { λ ∈ R^l : Σ_{k=1}^l λ_k = 1, λ_k ≥ 0 }. If (y*, λ*) is an optimal solution of (C5), then the optimal solution of (C) is given by

z_k* = (2τ* λ_k*)^{−1/2} y_k*,  k = 1, …, l,   (11)

where

τ* = max_{1≤i≤m} Σ_{k=1}^l y_k*^T A_i y_k*/(2λ_k*).
Proof. The equivalence of problem (C) to a convex problem is established via the following four steps, involving double minimization reformulations and nonlinear transformations.

Step 1. Rewrite (C) as a double minimization problem, with the additional variable τ > 0:

(C1)  min_{z_k ∈ R^n} min_{τ>0} { τ − (2τ Σ_{k=1}^l (p_k^T z_k)²)^{1/2} : Σ_{k=1}^l z_k^T A_i z_k ≤ 1, i = 1, …, m }.

The equivalence of (C1) to (C) is obtained by solving the inner minimization problem in τ, which by Lemma 9(i) with α := Σ_k (p_k^T z_k)² gives the objective function of (C) with the optimal τ*:

τ* = ½ Σ_k (p_k^T z_k)².
Step 2. Replace the variables z_k in (C1) by the new variables x_k:

z_k = (2τ)^{−1/2} x_k,  k = 1, …, l,   (12)

to obtain a new equivalent problem

(C2)  min_{x_k ∈ R^n} min_{τ>0} { τ − (Σ_{k=1}^l (p_k^T x_k)²)^{1/2} : Σ_{k=1}^l x_k^T A_i x_k ≤ 2τ, i = 1, …, m }.
Step 3. Rewrite (C2) as a double minimization problem with the additional variables s_k ∈ R, k = 1, …, l:

(C3)  min_{τ>0} min_{x_k ∈ R^n} min_{s ∈ R^l} { τ − Σ_{k=1}^l s_k (p_k^T x_k) : Σ_{k=1}^l x_k^T A_i x_k ≤ 2τ, i = 1, …, m, Σ_{k=1}^l s_k² = 1 }.

Invoking Lemma 9(ii), the inner minimization problem has the optimal solution

s_k* = (p_k^T x_k) / (Σ_{k=1}^l (p_k^T x_k)²)^{1/2},  k = 1, …, l,   (13)
with optimal value reducing to the objective function of (C2).
Step 4. Define the new variables λ_k ∈ R₊, y_k ∈ R^n, k = 1, …, l:

λ_k = s_k²,  y_k = s_k x_k.   (14)

Note that if λ_k = 0 for some k, then set y_k = 0. Indeed, with λ_k = 0 we obtain s_k = 0, and for s_k = 0 in problem (C3) it is optimal to choose x_k = 0. Using the transformation (14) in (C3), we obtain the new equivalent problem

(C4)  min{ τ − Σ_{k=1}^l p_k^T y_k : Σ_{k=1}^l y_k^T A_i y_k/(2λ_k) ≤ τ, i = 1, …, m, y ∈ R^{nl}, τ > 0, λ ∈ Δ },

where Δ := { λ ∈ R^l : Σ_{k=1}^l λ_k = 1, λ_k ≥ 0 }.
Problem (C4) is a convex program, since the function f_i(y, λ) := Σ_{k=1}^l y_k^T A_i y_k/(2λ_k) is jointly convex on R^{nl} × Δ. This follows since the quadratic form corresponding to this function is calculated to give

(d, δ)^T ∇² f_i(y, λ)(d, δ) = Σ_{k=1}^l (1/λ_k) ‖A_i^{1/2}(d_k − (δ_k/λ_k) y_k)‖²,

which is clearly nonnegative for all d ∈ R^{nl}, δ ∈ R^l.
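The quadratic-over-linear function y^T A y/(2λ) behind this convexity argument can be checked numerically: its closed-form Hessian quadratic form should be nonnegative and should match a finite-difference second derivative (random test data; tolerances account for the finite differencing):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
B = rng.standard_normal((n, n))
A = B @ B.T                                # one positive semidefinite block A_i

def hess_quadform(y, lam, dvec, delta):
    # Closed form of the Hessian quadratic form of f(y, lam) = y^T A y/(2 lam)
    # in the direction (dvec, delta).
    r = dvec - (delta / lam) * y
    return (r @ A @ r) / lam

for _ in range(1000):
    y = rng.standard_normal(n); lam = rng.uniform(0.5, 2.0)
    dvec = rng.standard_normal(n); delta = rng.standard_normal()

    def f(t):                              # f along the line (y, lam) + t (dvec, delta)
        return (y + t * dvec) @ A @ (y + t * dvec) / (2 * (lam + t * delta))

    h = 1e-4
    fd = (f(h) - 2 * f(0) + f(-h)) / h ** 2
    q = hess_quadform(y, lam, dvec, delta)
    assert q >= -1e-9                      # joint convexity
    assert np.isclose(fd, q, rtol=1e-2, atol=1e-2)
```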
Problem (C4) can be further reduced to the equivalent convex minimax problem

(C5)  min{ max_{1≤i≤m} Σ_{k=1}^l y_k^T A_i y_k/(2λ_k) − Σ_{k=1}^l p_k^T y_k : (y, λ) ∈ R^{nl} × Δ }.

If (y*, λ*) is an optimal solution of problem (C5), then τ* = max_{1≤i≤m} Σ_{k=1}^l y_k*^T A_i y_k*/(2λ_k*) and, using (12) and (14), the optimal solution of problem (C) is given by

z_k* = (2τ* λ_k*)^{−1/2} y_k*,  k = 1, …, l,

and the proof is completed. □
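The four-step transformation and the recovery formula (11) can be exercised numerically. For any feasible z of (C), applying Steps 1-4 yields a pair (y, λ) whose (C5) value lower-bounds the (C) objective at z, and the recovery map returns a feasible point of (C); the sketch below (random data; the A_i are made positive definite for simplicity, and L plays the role of the index bound l) checks both facts:

```python
import numpy as np

rng = np.random.default_rng(5)
n, L, m = 3, 2, 2
p = rng.standard_normal((L, n))
A = []
for _ in range(m):
    B = rng.standard_normal((n, n))
    A.append(B @ B.T + 0.1 * np.eye(n))    # positive definite A_i

def obj_C(z):                              # objective of (C)
    return -0.5 * sum((p[k] @ z[k]) ** 2 for k in range(L))

def feas_C(z):                             # constraints of (C)
    return all(sum(z[k] @ Ai @ z[k] for k in range(L)) <= 1 + 1e-9 for Ai in A)

def F_C5(y, lam):                          # objective of (C5), with 0/0 := 0
    fi = [sum(y[k] @ Ai @ y[k] / (2 * lam[k]) for k in range(L) if lam[k] > 0)
          for Ai in A]
    return max(fi) - sum(p[k] @ y[k] for k in range(L))

for _ in range(200):
    z = rng.standard_normal((L, n))
    t = max(sum(z[k] @ Ai @ z[k] for k in range(L)) for Ai in A)
    z /= np.sqrt(t) * rng.uniform(1.0, 2.0)                 # random feasible z for (C)
    assert feas_C(z)
    tau = 0.5 * sum((p[k] @ z[k]) ** 2 for k in range(L))   # Step 1
    x = np.sqrt(2 * tau) * z                                # Step 2, (12)
    a = np.array([p[k] @ x[k] for k in range(L)])
    sv = a / np.linalg.norm(a)                              # Step 3, (13)
    lam, y = sv ** 2, sv[:, None] * x                       # Step 4, (14)
    assert F_C5(y, lam) <= obj_C(z) + 1e-8                  # (C5) lower-bounds (C)
    tau_r = max(sum(y[k] @ Ai @ y[k] / (2 * lam[k]) for k in range(L)) for Ai in A)
    z_r = y / np.sqrt(2 * tau_r * lam)[:, None]             # recovery (11)
    assert feas_C(z_r)                                      # recovered point feasible
```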
Remark 11. One referee pointed out that problem (C) might be transformed to some kind of homogeneous program, to which one could then perhaps apply results for this type of programs, see, e.g., [3]. This could be an interesting alternative way to study problem (C). However, we note that the proof we have provided is self-contained and also reveals an interesting mechanism that could be useful in other contexts.

As an interesting special case, with A_i = I, i = 1, …, m, and replacing the objective function of (C) by an arbitrary (i.e., not necessarily in dyadic form) positive semidefinite matrix P, we obtain that the problem of minimizing a concave quadratic function −z^T P z over a ball constraint is equivalent to solving an unconstrained convex problem. This provides another interesting explanation as to why such problems are so tractable and can be solved efficiently in polynomial time, as recently shown in [16]. In particular,
for problem (C5), an explicit interior-point path-following polynomial-time algorithm is given in [2], which solves this problem in O(√(m + 1)) Newton steps, each step requiring O((m + nl) n²l) arithmetic operations.
Acknowledgements
We thank two referees for their detailed comments which helped us to improve the
results of this paper.
References

[1] W. Achtziger, Minimax compliance truss topology subject to multiple loading, in: M.P. Bendsøe and C.A. Mota Soares, eds., Topology Design of Structures (Kluwer, Dordrecht, 1993) 43-54.
[2] A. Ben-Tal and A. Nemirovskii, Interior point polynomial time method for truss topology design, Research Report 3/92, Optimization Laboratory, Technion - Israel Institute of Technology (1992).
[3] E. Eisenberg, Duality in homogeneous programming, Proceedings of the American Mathematical Society 12 (1961) 783-787.
[4] P. Finsler, Über das Vorkommen definiter und semidefiniter Formen in Scharen quadratischer Formen, Commentarii Mathematici Helvetici 9 (1937) 188-192.
[5] R. Fletcher, Practical Methods of Optimization (Wiley, Chichester, 2nd ed., 1987).
[6] O.E. Flippo and B. Jansen, Duality and sensitivity in quadratic optimization over a sphere, Technical Report 92-65, Technical University of Delft (1992).
[7] W. Gander, Least squares with a quadratic constraint, Numerische Mathematik 36 (1981) 291-307.
[8] D.M. Gay, Computing optimal locally constrained steps, SIAM Journal on Scientific and Statistical Computing 2 (1981) 186-197.
[9] G.H. Golub and U. von Matt, Quadratically constrained least squares and quadratic problems, Numerische Mathematik 59 (1991) 561-580.
[10] R.A. Horn and C.R. Johnson, Matrix Analysis (Cambridge University Press, New York, 1985).
[11] F. Körner, A tight bound for the boolean quadratic optimization problem and its use in a branch and bound algorithm, Optimization 19 (1988) 711-721.
[12] J.J. Moré, Recent developments in algorithms and software for trust region methods, in: A. Bachem, M. Grötschel and B. Korte, eds., Mathematical Programming: The State of the Art (Springer, New York, 1983) 268-285.
[13] Y. Nesterov and A. Nemirovskii, Self-Concordant Functions and Polynomial Time Algorithms (CEMI, Moscow, 1989).
[14] R.T. Rockafellar, Convex Analysis (Princeton University Press, Princeton, NJ, 1970).
[15] R.J. Stern and H. Wolkowicz, Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations, SIAM Journal on Optimization 5 (1995) 286-313.
[16] Y. Ye, On affine scaling algorithms for nonconvex quadratic programming, Mathematical Programming 56 (3) (1992) 285-300.
[17] Y. Yuan, On a subproblem of trust region algorithms for constrained optimization, Mathematical Programming 47 (1) (1990) 53-63.