OPTIMALITY CONDITIONS FOR OPTIMIZATION PROBLEMS
WITH COMPLEMENTARITY CONSTRAINTS
J. J. YE
SIAM J. OPTIM.                                   © 1999 Society for Industrial and Applied Mathematics
Vol. 9, No. 2, pp. 374–387
Abstract. Optimization problems with complementarity constraints are closely related to opti-
mization problems with variational inequality constraints and bilevel programming problems. In this
paper, under mild constraint qualifications, we derive some necessary and sufficient optimality con-
ditions involving the proximal coderivatives. As an illustration of applications, the result is applied
to the bilevel programming problems where the lower level is a parametric linear quadratic problem.
Key words. optimization problems, complementarity constraints, optimality conditions, bilevel
programming problems, proximal normal cones
AMS subject classifications. 49K99, 90C, 90D65
PII. S1052623497321882
1. Introduction. The main purpose of this paper is to derive necessary and
sufficient optimality conditions for the optimization problem with complementarity
constraints (OPCC) defined as follows:
(OPCC)      min   f(x, y, u)
            s.t.  ⟨u, ψ(x, y, u)⟩ = 0,  u ≥ 0,  ψ(x, y, u) ≤ 0,                    (1.1)
                  L(x, y, u) = 0,  g(x, y, u) ≤ 0,  (x, y, u) ∈ Ω,

where f : R^{n+m+q} → R, ψ : R^{n+m+q} → R^q, L : R^{n+m+q} → R^l, g : R^{n+m+q} → R^d,
and Ω is a nonempty subset of R^{n+m+q}.
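For readers who want to experiment with (OPCC) numerically, the following small Python sketch (mine, not part of the paper; the function interface and the toy data are illustrative assumptions) checks whether a candidate point satisfies the complementarity system (1.1) together with the remaining constraints, for the case Ω = R^{n+m+q}.

```python
import numpy as np

def opcc_feasible(x, y, u, psi, L, g, tol=1e-9):
    """Check feasibility of (x, y, u) for (OPCC) with Omega = R^{n+m+q}.

    psi, L, g are callables returning the constraint values psi(x,y,u),
    L(x,y,u) and g(x,y,u) as 1-D numpy arrays (assumed interface)."""
    psi_val, L_val, g_val = psi(x, y, u), L(x, y, u), g(x, y, u)
    complementarity = abs(np.dot(u, psi_val)) <= tol   # <u, psi(x,y,u)> = 0
    u_nonneg = np.all(u >= -tol)                       # u >= 0
    psi_nonpos = np.all(psi_val <= tol)                # psi(x,y,u) <= 0
    equality = np.all(np.abs(L_val) <= tol)            # L(x,y,u) = 0
    inequality = np.all(g_val <= tol)                  # g(x,y,u) <= 0
    return complementarity and u_nonneg and psi_nonpos and equality and inequality

# Toy instance (assumed data): n = m = q = 1, one equality, one inequality.
psi = lambda x, y, u: np.array([x[0] - u[0]])
L   = lambda x, y, u: np.array([y[0] - u[0]])
g   = lambda x, y, u: np.array([x[0] + y[0] - 2.0])
print(opcc_feasible(np.array([0.0]), np.array([0.0]), np.array([0.0]), psi, L, g))  # True
```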
(OPCC) is an optimization problem with equality and inequality constraints.
However, due to the complementarity constraint (1.1), the Karush–Kuhn–Tucker
(KKT) necessary optimality condition is rarely satisfied by (OPCC) since it can be
shown as in [9, Proposition 1.1] that there always exists a nontrivial abnormal multi-
plier. This is equivalent to saying that the usual constraint qualification conditions,
such as the Mangasarian–Fromovitz condition, will never be satisfied (see [8, Propo-
sition 3.1]). The purpose of this paper is to derive necessary and sufficient optimality
conditions under mild constraint qualifications that are satisfied by a large class of
OPCCs.
To motivate our main results, we formulate problem (OPCC), where Ω = R^{n+m+q},
as the following optimization problem with a generalized equation constraint:

(GP)        min   f(x, y, u)
            s.t.  0 ∈ −ψ(x, y, u) + N(u, R^q_+),                                   (1.2)
                  L(x, y, u) = 0,  g(x, y, u) ≤ 0,

where

    N(u, C) :=  the normal cone of C at u,   if u ∈ C,
                ∅,                           if u ∉ C,

is the normal cone operator in the sense of convex analysis.

Received by the editors May 26, 1997; accepted for publication (in revised form) May 4, 1998;
published electronically March 17, 1999. This work was supported by the Natural Sciences and
Engineering Research Council of Canada and a University of Victoria internal research grant.
http://www.siam.org/journals/siopt/9-2/32188.html
Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 3P4,
Canada (janeye@uvic.ca).
Let (¯x, ¯y, ¯u) be a solution of (OPCC), where Ω = R^{n+m+q}. If N(u, R^q_+) were
single-valued and smooth, then the generalized equation constraint (1.2) would re-
duce to an ordinary equation. Using the KKT condition, we could deduce that if a
constraint qualification is satisfied for (GP) and the problem data are smooth, then
there exist KKT multipliers ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q such that

    0 = ∇f(¯x, ¯y, ¯u) + ∇L(¯x, ¯y, ¯u)^⊤ ξ + ∇g(¯x, ¯y, ¯u)^⊤ ζ
            − ∇ψ(¯x, ¯y, ¯u)^⊤ η + {0} × {0} × ∇N_{R^q_+}(¯u)^⊤ η,
    0 = ⟨ζ, g(¯x, ¯y, ¯u)⟩,  ζ ≥ 0,

where ∇ denotes the usual gradient, M^⊤ denotes the transpose of the matrix M, and
N_C denotes the map y ↦ N(y, C). However, u ↦ N(u, R^q_+) is in general a set-valued
map. Naturally, we hope to replace ∇N_{R^q_+}(¯u)^⊤ η by the image of some derivative
of the set-valued map u ↦ N(u, R^q_+) acting on the vector η. The natural candidate
for such a derivative of set-valued maps is the Mordukhovich coderivative (see Defini-
tion 2.3) since the Mordukhovich coderivatives have a good calculus, and in the case
when the set-valued map is single-valued and smooth, the image of the Mordukhovich
coderivative acting on a vector coincides with the usual gradient operator acting on
the vector (see [6, Proposition 2.4]). Indeed, as in [7], we can show that if (¯x, ¯y, ¯u) is
an optimal solution of (OPCC) and a constraint qualification holds, then there exist
ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q such that

    0 ∈ ∇f(¯x, ¯y, ¯u) + ∇L(¯x, ¯y, ¯u)^⊤ ξ + ∇g(¯x, ¯y, ¯u)^⊤ ζ
            − ∇ψ(¯x, ¯y, ¯u)^⊤ η + {0} × {0} × D^* N_{R^q_+}(¯u, ψ(¯x, ¯y, ¯u))(η),
    0 = ⟨ζ, g(¯x, ¯y, ¯u)⟩,  ζ ≥ 0,

where D^* denotes the Mordukhovich coderivative (see Definition 2.3). Recall from [7,
Definition 2.8] that a set-valued map Φ : R^n ⇉ R^q with a closed graph is said to be
pseudo-upper-Lipschitz continuous at (¯z, ¯v) with ¯v ∈ Φ(¯z) if there exist a neighbor-
hood U of ¯z, a neighborhood V of ¯v, and a constant µ > 0 such that

    Φ(z) ∩ V ⊆ Φ(¯z) + µ‖z − ¯z‖B    ∀ z ∈ U.
The constraint qualification for the above necessary condition involving the Mor-
dukhovich coderivative turns out to be the pseudo-upper-Lipschitz continuity of the
set-valued map

    Σ(v_1, v_2, v_3) := {(x, y, u) : v_1 ∈ −ψ(x, y, u) + N(u, R^q_+),
                         L(x, y, u) = v_2,  g(x, y, u) + v_3 ≤ 0}

at (¯x, ¯y, ¯u, 0). This constraint qualification is very mild since the pseudo-upper-
Lipschitz continuity is weaker than both the upper-Lipschitz continuity and the pseudo-
Lipschitz continuity (the so-called Aubin property). However, the Mordukhovich
normal cone involved in the necessary condition may be too large sometimes. For ex-
ample, in [7, Example 4.1], both (0, 0) and (1, 1) satisfy the above necessary conditions,
but (1, 1) is the unique optimal solution. Can one replace the Mordukhovich
normal cone involved in the necessary condition by the potentially smaller proximal
normal cone? The answer is negative in general, since the proximal coderivative as
defined in Definition 2.3 usually has only a “fuzzy” calculus. Consider the following
optimization problem:

    min   y
    s.t.  y − u = 0,  yu = 0,  y ≥ 0,  u ≥ 0.
The unique optimal solution (0, 0) does not satisfy the KKT condition but satisfies
the necessary condition involving the Mordukhovich coderivatives. It does not satisfy
the necessary condition with the Mordukhovich normal cone replaced by the proximal
normal cone. This example shows that some extra assumptions are needed for the
necessary condition involving the proximal coderivatives to hold. In this paper such a
condition is found. Moreover, we show that the proximal normal cone involved in the
necessary condition can be represented by a system of linear and nonlinear equations,
and the necessary optimality conditions involving the proximal coderivatives turn out
to be sufficient under some convexity assumptions on the problem data.
Although the optimization problems with complementarity constraints are a class
of optimization problems with independent interest, the incentive to study (OPCC)
mainly comes from the following optimization problem with variational inequality
constraints (OPVIC), where the constraint region of the variational inequality is a
system of inequalities:
(OPVIC)     min   f(x, y)
            s.t.  y ∈ S(x),  g(x, y) ≤ 0,  (x, y) ∈ Ω,

where f : R^{n+m} → R, Ω is a nonempty subset of R^{m+n}, and S(x) is the solution set
of a variational inequality with parameter x; i.e.,

    S(x) = {y ∈ R^m : ψ(x, y) ≤ 0 and ⟨F(x, y), z − y⟩ ≥ 0 ∀ z s.t. ψ(x, z) ≤ 0},

where F : R^{n+m} → R^m and ψ : R^{n+m} → R^q. The recent monograph [4] by Luo, Pang,
and Ralph contains an extensive study of (OPVIC). The reader may find references
for the various optimality conditions for (OPVIC) in [4].
(OPCC) is closely related to OPVICs and bilevel programming problems. Indeed,
if ψ is C^1 and quasi-convex in y and a certain constraint qualification condition holds
at ¯y for the optimization problem

    min  ⟨F(¯x, ¯y), z⟩    s.t.  ψ(¯x, z) ≤ 0,

then by the KKT necessary and sufficient optimality condition, (¯x, ¯y) is a solution
of (OPVIC) if and only if there exists ¯u ∈ R^q such that (¯x, ¯y, ¯u) is a solution of the
following optimization problem:

(KS)        min   f(x, y)
            s.t.  ⟨u, ψ(x, y)⟩ = 0,  u ≥ 0,  ψ(x, y) ≤ 0,
                  F(x, y) + ∇_y ψ(x, y)^⊤ u = 0,
                  g(x, y) ≤ 0,  (x, y) ∈ Ω,

which is a special case of (OPCC).
In the case where F(x, y) = ∇_y h(x, y), where h : R^{n+m} → R is differentiable and
pseudoconvex in y, (KS) is equivalent to the following bilevel programming problem
(BLPP), or so-called Stackelberg game:

(BLPP)      min   f(x, y)
            s.t.  y ∈ S(x),  g(x, y) ≤ 0,  (x, y) ∈ Ω,

where S(x) is the set of solutions of the problem (P_x):

(P_x)       minimize  h(x, y)    s.t.  ψ(x, y) ≤ 0.
We organize the paper as follows. Section 2 contains background material on
nonsmooth analysis and preliminary results. In section 3 we derive the necessary and
sufficient optimality conditions for (OPCC). As an illustration of applications, we also
apply the result to (BLPP), where the lower level is a linear quadratic programming
problem.
2. Preliminaries. This section contains some background material on non-
smooth analysis and preliminary results which will be used later. We give only concise
definitions that will be needed in the paper. For more detailed information on the
subject, our references are Clarke [1, 2], Loewen [3], and Mordukhovich [6].
First we give some concepts for various normal cones and subgradients.
Definition 2.1. Let Ω be a nonempty subset of R^n. Given ¯z ∈ cl Ω, the closure
of the set Ω, the convex cone

    N^π(¯z, Ω) := {ξ ∈ R^n : ∃ M > 0 s.t. ⟨ξ, z − ¯z⟩ ≤ M‖z − ¯z‖²  ∀ z ∈ Ω}

is called the proximal normal cone to the set Ω at the point ¯z, and the closed cone

    N̂(¯z, Ω) := { lim_{i→∞} ξ_i : ξ_i ∈ N^π(z_i, Ω), z_i → ¯z }

is called the limiting normal cone to Ω at the point ¯z.
Remark 2.1. It is known that if Ω is convex, then the proximal normal cone
and the limiting normal cone coincide with the normal cone in the sense of convex
analysis.
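As a quick illustration of Definition 2.1 (my own, not from the paper), the sketch below numerically tests the defining proximal-normal inequality on a finite sample of points from a concrete closed set, here the union of two rays ([0, ∞) × {0}) ∪ ({0} × (−∞, 0]); the candidate vectors and the constant M are assumptions chosen for the example. A finite-sample test like this can only provide evidence, not a proof, that a vector is a proximal normal.

```python
import numpy as np

def proximal_normal_test(xi, z_bar, samples, M):
    """Test the inequality <xi, z - z_bar> <= M * ||z - z_bar||^2 of
    Definition 2.1 over a finite sample of points z from the set Omega."""
    diffs = samples - z_bar
    lhs = diffs @ xi
    rhs = M * np.sum(diffs**2, axis=1)
    return bool(np.all(lhs <= rhs + 1e-12))

# Omega = ([0, inf) x {0}) U ({0} x (-inf, 0]), sampled on a grid.
t = np.linspace(0.0, 2.0, 201)
samples = np.vstack([np.column_stack([t, 0 * t]),      # horizontal ray
                     np.column_stack([0 * t, -t])])    # vertical ray
z_bar = np.array([0.0, 0.0])

print(proximal_normal_test(np.array([-1.0, 1.0]), z_bar, samples, M=1.0))  # True
print(proximal_normal_test(np.array([ 1.0, 0.0]), z_bar, samples, M=1.0))  # False
```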
Definition 2.2. Let f : R^n → R ∪ {+∞} be lower semicontinuous and finite at
¯z ∈ R^n. The limiting subgradient of f at ¯z is defined to be the set

    ∂̂f(¯z) := {ζ : (ζ, −1) ∈ N̂((¯z, f(¯z)), epi f)},

where epi f := {(z, v) : v ≥ f(z)} denotes the epigraph of f.
Remark 2.2. It is known that if f is a convex function, the limiting subgradient
coincides with the subgradient in the sense of convex analysis. For a locally Lipschitz
function f, ∂f = co ∂̂f(x), where ∂ denotes the Clarke generalized gradient and co
denotes the convex hull. Hence the limiting subgradient is in general a smaller set
than the Clarke generalized gradient.
For set-valued maps, the definition for limiting normal cone leads to the definition
of coderivative of a set-valued map introduced by Mordukhovich (see, e.g., [6]).
Definition 2.3. Let Φ : R^n ⇉ R^q be an arbitrary set-valued map (assigning to
each z ∈ R^n a set Φ(z) ⊆ R^q which may be empty) and (¯z, ¯v) ∈ cl GrΦ, where GrΦ
denotes the graph of Φ; i.e., (z, v) ∈ GrΦ if and only if v ∈ Φ(z). The set-valued
maps from R^q into R^n defined by

    D^π Φ(¯z, ¯v)(η) = {ζ ∈ R^n : (ζ, −η) ∈ N^π((¯z, ¯v), GrΦ)},
    D^* Φ(¯z, ¯v)(η) = {ζ ∈ R^n : (ζ, −η) ∈ N̂((¯z, ¯v), GrΦ)}

are called the proximal and Mordukhovich coderivatives of Φ at the point (¯z, ¯v), respec-
tively.
Proposition 2.4. Suppose B is closed, ¯x ∈ A, ¯x ∉ B. Then

    N^π(¯x, A ∪ B) = N^π(¯x, A).

Proof. Since ¯x ∉ B and B is closed, there exists a neighborhood of ¯x that does not
intersect B; hence the points of B stay a positive distance away from ¯x, and the defining
inequality of the proximal normal cone can always be satisfied on B by enlarging the
constant M. Therefore, from the definition of the proximal normal cone, we have
N^π(¯x, A ∪ B) = N^π(¯x, A).
In the following proposition we show that the proximal normal cone of a union of
a finite number of sets is the intersection of the proximal cones.
Proposition 2.5. Let Ω = ∪_{i=1}^m Ω_i and ¯x ∈ ∩_{i=1}^m Ω_i. Suppose Ω_i, i = 1, 2, . . . , m,
are closed. Then

    N^π(¯x, Ω) = ∩_{i=1}^m N^π(¯x, Ω_i).

Proof. Let ζ ∈ N^π(¯x, Ω). Then, by definition, there exists a constant M > 0 such
that

    ⟨ζ, x − ¯x⟩ ≤ M‖x − ¯x‖²    ∀ x ∈ Ω = ∪_{i=1}^m Ω_i.

Since ¯x ∈ ∩_{i=1}^m Ω_i, the above inequality implies that ζ ∈ ∩_{i=1}^m N^π(¯x, Ω_i).
Conversely, suppose ζ ∈ ∩_{i=1}^m N^π(¯x, Ω_i). Then for all i = 1, 2, . . . , m, there exists
M_i > 0 such that

    ⟨ζ, x − ¯x⟩ ≤ M_i‖x − ¯x‖²    ∀ x ∈ Ω_i.

That is, there exists M = max_{i∈{1,2,...,m}} M_i > 0 such that

    ⟨ζ, x − ¯x⟩ ≤ M‖x − ¯x‖²    ∀ x ∈ Ω = ∪_{i=1}^m Ω_i,

which implies that ζ ∈ N^π(¯x, Ω).
The above decomposition formula for calculating proximal normal cones turns
out to be very useful: when a set can be written as a union of convex sets, the task
of calculating the proximal normal cone is reduced to calculating normal cones to
convex sets, which are easier to compute. The following proposition is a nice
application of the decomposition formula and will be used to calculate the proximal
normal cone to the graph of the set-valued map N_{R^q_+} for general q in Proposition 2.7.
Proposition 2.6.

    N^π((¯x, ¯y), GrN_{R_+}) =  {0} × R              if ¯x > 0, ¯y = 0,
                               R × {0}              if ¯x = 0, ¯y < 0,
                               (−∞, 0] × [0, ∞)     if ¯x = ¯y = 0.

Proof. It is easy to see that GrN_{R_+} = Ω_1 ∪ Ω_2, where Ω_1 = [0, ∞) × {0} and
Ω_2 = {0} × (−∞, 0].
We discuss the following three cases.
Case 1. ¯x > 0, ¯y = 0.
In this case, (¯x, ¯y) ∈ Ω_1 and (¯x, ¯y) ∉ Ω_2. Since Ω_2 is closed, by Proposition 2.4
we have in this case

    N^π((¯x, ¯y), GrN_{R_+}) = N((¯x, ¯y), Ω_1) = {0} × R.

Case 2. ¯x = 0, ¯y < 0.
In this case, (¯x, ¯y) ∈ Ω_2 and (¯x, ¯y) ∉ Ω_1. Since Ω_1 is closed, by Proposition 2.4
we have in this case

    N^π((¯x, ¯y), GrN_{R_+}) = N((¯x, ¯y), Ω_2) = R × {0}.

Case 3. ¯x = ¯y = 0.
In this case, (¯x, ¯y) ∈ Ω_1 ∩ Ω_2. By Proposition 2.5 we have

    N^π((¯x, ¯y), GrN_{R_+}) = N((¯x, ¯y), Ω_1) ∩ N((¯x, ¯y), Ω_2)
                            = ((−∞, 0] × R) ∩ (R × [0, ∞))
                            = (−∞, 0] × [0, ∞).
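The case analysis in Proposition 2.6 translates directly into a small membership test. The sketch below (illustrative only; the function name and tolerance are mine) returns whether a pair (γ, η) belongs to N^π((¯x, ¯y), GrN_{R_+}) according to the three cases.

```python
def in_prox_normal_grN(x_bar, y_bar, gamma, eta, tol=1e-12):
    """Membership test for (gamma, eta) in N^pi((x_bar, y_bar), Gr N_{R_+}),
    following the three cases of Proposition 2.6."""
    if x_bar > tol and abs(y_bar) <= tol:        # case 1: {0} x R
        return abs(gamma) <= tol
    if abs(x_bar) <= tol and y_bar < -tol:       # case 2: R x {0}
        return abs(eta) <= tol
    if abs(x_bar) <= tol and abs(y_bar) <= tol:  # case 3: (-inf, 0] x [0, inf)
        return gamma <= tol and eta >= -tol
    return False                                 # (x_bar, y_bar) not in the graph

print(in_prox_normal_grN(0.0, 0.0, -1.0, 1.0))   # True
print(in_prox_normal_grN(0.0, 0.0,  1.0, 1.0))   # False (gamma must be <= 0)
print(in_prox_normal_grN(2.0, 0.0,  0.0, -5.0))  # True  (eta is free when x_bar > 0)
```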
Now we are in a position to give an expression for the proximal normal cone to
the graph of the set-valued map N_{R^q_+} for general q.
Proposition 2.7. For any (¯x, ¯y) ∈ GrN_{R^q_+}, define

    L := L(¯x) := {i ∈ {1, 2, . . . , q} : ¯x_i > 0},
    I_+ := I_+(¯x, ¯y) := {i ∈ {1, 2, . . . , q} : ¯x_i = 0, ¯y_i < 0},
    I_0 := I_0(¯x, ¯y) := {i ∈ {1, 2, . . . , q} : ¯x_i = 0, ¯y_i = 0}.

Then

    N^π((¯x, ¯y), GrN_{R^q_+}) = {(γ, η) ∈ R^{2q} : η_{I_0} ≥ 0, η_{I_+} = 0, γ_L = 0, γ_{I_0} ≤ 0}.
Proof. Since

    GrN_{R^q_+} = {(x, y) ∈ R^{2q} : y ∈ N(x, R^q_+)}
                = {(x, y) ∈ R^{2q} : y ∈ N(x_1, R_+) × N(x_2, R_+) × · · · × N(x_q, R_+)}
                = {(x, y) ∈ R^{2q} : (x_i, y_i) ∈ GrN_{R_+}  ∀ i = 1, 2, . . . , q},

we have

    (x, y) ∈ GrN_{R^q_+}  if and only if  (x_1, y_1, x_2, y_2, . . . , x_q, y_q) ∈ ∏_{i=1}^q GrN_{R_+}.

Hence from the definition, it is clear that

    (γ, η) ∈ N^π((¯x, ¯y), GrN_{R^q_+})

if and only if

    (γ_1, η_1, γ_2, η_2, . . . , γ_q, η_q) ∈ N^π((¯x_1, ¯y_1, ¯x_2, ¯y_2, . . . , ¯x_q, ¯y_q), ∏_{i=1}^q GrN_{R_+})
                                          = ∏_{i=1}^q N^π((¯x_i, ¯y_i), GrN_{R_+}).

The rest of the proof follows from Proposition 2.6.
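In computational terms, Proposition 2.7 reduces membership in N^π((¯x, ¯y), GrN_{R^q_+}) to a componentwise sign check driven by the index sets L, I_+, I_0. A minimal vectorized sketch (an illustration of mine, not from the paper):

```python
import numpy as np

def in_prox_normal_grNq(x_bar, y_bar, gamma, eta, tol=1e-12):
    """Membership test for (gamma, eta) in N^pi((x_bar, y_bar), Gr N_{R^q_+})
    via the index sets of Proposition 2.7 (x_bar >= 0, y_bar <= 0 assumed)."""
    x_bar, y_bar = np.asarray(x_bar, float), np.asarray(y_bar, float)
    gamma, eta = np.asarray(gamma, float), np.asarray(eta, float)
    L  = x_bar > tol                                   # x_bar_i > 0
    Ip = (np.abs(x_bar) <= tol) & (y_bar < -tol)       # x_bar_i = 0, y_bar_i < 0
    I0 = (np.abs(x_bar) <= tol) & (np.abs(y_bar) <= tol)
    return bool(np.all(eta[I0] >= -tol) and np.all(np.abs(eta[Ip]) <= tol)
                and np.all(np.abs(gamma[L]) <= tol) and np.all(gamma[I0] <= tol))

x_bar = np.array([2.0, 0.0, 0.0])
y_bar = np.array([0.0, -1.0, 0.0])
print(in_prox_normal_grNq(x_bar, y_bar, gamma=[0.0, 3.0, -1.0], eta=[-4.0, 0.0, 2.0]))  # True
print(in_prox_normal_grNq(x_bar, y_bar, gamma=[0.5, 3.0, -1.0], eta=[-4.0, 0.0, 2.0]))  # False
```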
It turns out that we can express any element of N^π((¯x, ¯y), GrN_{R^q_+}) by a system
of nonlinear equations as in the following proposition.
Proposition 2.8.

    (γ, η) ∈ N^π((¯x, ¯y), GrN_{R^q_+})

if and only if there exist α, β ∈ R^{2q}_+ such that

    0 = ∑_{i=1}^q ¯x_i(α_i + β_i) − ∑_{i=1}^q ¯y_i(α_{q+i} + β_{q+i}),                  (2.1)
    γ_i = −α_i − ¯y_i β_i,           i = 1, 2, . . . , q,                               (2.2)
    η_i = α_{q+i} − ¯x_i β_{q+i},     i = 1, 2, . . . , q.                               (2.3)

Proof. By Proposition 2.7, (γ, η) ∈ N^π((¯x, ¯y), GrN_{R^q_+}) if and only if

    η_{I_0} ≥ 0,  η_{I_+} = 0,  γ_L = 0,  γ_{I_0} ≤ 0.

By the definition of the index sets I_0, I_+, L in Proposition 2.7, we have

    η_{I_0} ≥ 0, γ_{I_0} ≤ 0   if and only if   ¯x_i = ¯y_i = 0 ⇒ η_i ≥ 0, γ_i ≤ 0,
    η_{I_+} = 0                if and only if   ¯y_i < 0 ⇒ η_i = 0,
    γ_L = 0                    if and only if   ¯x_i > 0 ⇒ γ_i = 0.

Since for any (¯x, ¯y) ∈ GrN_{R^q_+} we have ¯x ≥ 0, ¯y ≤ 0, for nonnegative vectors α and β,
(2.1) is equivalent to

    ¯x_i(α_i + β_i) = 0,  ¯y_i(α_{q+i} + β_{q+i}) = 0    ∀ i = 1, . . . , q.

Hence the existence of nonnegative vectors α and β satisfying (2.1)–(2.3) is equivalent
to the following condition:

    ¯x_i = ¯y_i = 0  ⇒  η_i ≥ 0, γ_i ≤ 0,
    ¯y_i < 0        ⇒  η_i = 0,
    ¯x_i > 0        ⇒  γ_i = 0.

Consequently, it is equivalent to

    η_{I_0} ≥ 0,  η_{I_+} = 0,  γ_L = 0,  γ_{I_0} ≤ 0.

The proof of the proposition is therefore complete.
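Proposition 2.8, with the sign conventions as reconstructed in (2.1)–(2.3) above, is constructive: from a pair (γ, η) satisfying the index-set description of Proposition 2.7 one can write down α, β ≥ 0 explicitly. The following sketch is my own illustration of that construction and of the verification of (2.1)–(2.3); it is not taken from the paper.

```python
import numpy as np

def construct_alpha_beta(x_bar, y_bar, gamma, eta):
    """Given (gamma, eta) in N^pi((x_bar, y_bar), Gr N_{R^q_+}) (Proposition 2.7),
    build alpha, beta >= 0 satisfying (2.1)-(2.3) as reconstructed above."""
    q = len(x_bar)
    alpha, beta = np.zeros(2 * q), np.zeros(2 * q)
    for i in range(q):
        if x_bar[i] > 0:                    # i in L: gamma_i = 0, eta_i free
            alpha[q + i] = max(eta[i], 0.0)
            beta[q + i] = max(-eta[i], 0.0) / x_bar[i]
        elif y_bar[i] < 0:                  # i in I_+: eta_i = 0, gamma_i free
            alpha[i] = max(-gamma[i], 0.0)
            beta[i] = max(gamma[i], 0.0) / (-y_bar[i])
        else:                               # i in I_0: gamma_i <= 0 <= eta_i
            alpha[i], alpha[q + i] = -gamma[i], eta[i]
    return alpha, beta

x_bar, y_bar = np.array([2.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])
gamma, eta = np.array([0.0, 3.0, -1.0]), np.array([-4.0, 0.0, 2.0])
a, b = construct_alpha_beta(x_bar, y_bar, gamma, eta)
q = 3
print(np.isclose(x_bar @ (a[:q] + b[:q]) - y_bar @ (a[q:] + b[q:]), 0.0))  # (2.1)
print(np.allclose(gamma, -a[:q] - y_bar * b[:q]))                          # (2.2)
print(np.allclose(eta, a[q:] - x_bar * b[q:]))                             # (2.3)
```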
Finally, we would like to recall the following definition of a very mild constraint
qualification called “calmness,” introduced by Clarke [1].
Definition 2.9. Let ¯x be a local solution to the following mathematical program-
ming problem:

    minimize  f(x)
    s.t.      g(x) ≤ 0,
              h(x) = 0,
              x ∈ C,

where f : R^d → R, g : R^d → R^n, h : R^d → R^m, and C is a closed subset of R^d. The above
mathematical programming problem is said to be calm at ¯x provided that there exist
positive ε and M such that for all (p, q) ∈ εB, for all x in ¯x + εB satisfying g(x) + p ≤ 0,
h(x) + q = 0, x ∈ C, one has

    f(x) − f(¯x) + M‖(p, q)‖ ≥ 0,

where B is the open unit ball in the appropriate space.
It is well known that the calmness condition is a constraint qualification for the
existence of a KKT multiplier and the sufficient conditions for the calmness condition
include the linear independence condition, the Slater condition, and the Mangasarian–
Fromovitz condition. Moreover, the calmness condition is satisfied automatically in
the case where the feasible region is a polyhedron.
3. Optimality conditions for OPCC. Let (¯x, ¯y, ¯u) ∈ Ω with g(¯x, ¯y, ¯u) ≤ 0,
L(¯x, ¯y, ¯u) = 0. Let

    L(¯u) := {1 ≤ i ≤ q : ¯u_i > 0},
    I_+(¯x, ¯y, ¯u) := {1 ≤ i ≤ q : ¯u_i = 0, ψ_i(¯x, ¯y, ¯u) < 0},
    I_0(¯x, ¯y, ¯u) := {1 ≤ i ≤ q : ¯u_i = 0, ψ_i(¯x, ¯y, ¯u) = 0}.

Where there is no confusion, we simply use L, I_+, I_0 instead of L(¯u), I_+(¯x, ¯y, ¯u),
I_0(¯x, ¯y, ¯u), respectively. It is clear that {1, 2, . . . , q} = L(¯u) ∪ I_+(¯x, ¯y, ¯u) ∪ I_0(¯x, ¯y, ¯u).
Let

    F = {(x, y, u) :  L(x, y, u) = 0,  g(x, y, u) ≤ 0,
                      ⟨u, ψ(x, y, u)⟩ = 0,  u ≥ 0,  ψ(x, y, u) ≤ 0 }

be the feasible region of (OPCC). For any I ⊆ {1, 2, . . . , q}, let

    F_I := {(x, y, u) :  L(x, y, u) = 0,  g(x, y, u) ≤ 0,
                         u_i ≥ 0, ψ_i(x, y, u) = 0,   i ∈ I,
                         u_i = 0, ψ_i(x, y, u) ≤ 0,   i ∈ {1, 2, . . . , q}\I }

denote a piece of the feasible region F.
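The pieces F_I can be enumerated mechanically. The sketch below (an illustration of mine, not part of the paper) lists, for each index set I ⊆ {1, ..., q}, which complementarity components are treated as "u_i ≥ 0, ψ_i = 0" and which as "u_i = 0, ψ_i ≤ 0".

```python
from itertools import chain, combinations

def pieces(q):
    """Enumerate the pieces F_I of the feasible region over all I subsets of {1,...,q}.
    Each piece is described by the constraint pattern imposed on (u_i, psi_i)."""
    indices = range(1, q + 1)
    subsets = chain.from_iterable(combinations(indices, r) for r in range(q + 1))
    for I in subsets:
        pattern = {i: ("u_i >= 0, psi_i = 0" if i in I else "u_i = 0, psi_i <= 0")
                   for i in indices}
        yield set(I), pattern

for I, pattern in pieces(2):
    print(sorted(I), pattern)
# 2^q pieces in total; their union is the feasible region F.
```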
Taking the “piecewise programming” approach in the terminology of [4], as in
Corollary 2 of [5], we observe that the feasible region of the problem (OPCC) can
be rewritten as a union of all pieces F = ∪_{I⊆{1,2,...,q}} F_I. Therefore, a local solution
(¯x, ¯y, ¯u) for (OPCC) is also a local solution for each subproblem of minimizing the
objective function f over a piece which contains the point (¯x, ¯y, ¯u). Moreover, if
(¯x, ¯y, ¯u) is contained in all pieces and all subproblems are convex, then it is a global
minimum for the original problem (OPCC). Hence the following proposition follows
from this observation.
Proposition 3.1. Let (¯x, ¯y, ¯u) be a local optimal solution to (OPCC). Suppose
that f, g, ψ, L are locally Lipschitz near (¯x, ¯y, ¯u) and Ω is closed. If for any given
index set α ⊆ I_0, the problem of minimizing f over F_{α∪L} is calm in the sense of
Definition 2.9 at (¯x, ¯y, ¯u), then there exist ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q, γ ∈ R^q such that

    0 ∈ ∂̂f(¯x, ¯y, ¯u) + ∑_{i=1}^l ξ_i ∂̂L_i(¯x, ¯y, ¯u) + ∑_{i=1}^d ζ_i ∂̂g_i(¯x, ¯y, ¯u) + N̂((¯x, ¯y, ¯u), Ω)
            − ∑_{i=1}^q η_i ∂̂ψ_i(¯x, ¯y, ¯u) + {(0, 0, γ)},                                 (3.1)
    ζ ≥ 0,  ⟨ζ, g(¯x, ¯y, ¯u)⟩ = 0,                                                         (3.2)
    η_{I_0\α} ≤ 0,  η_{I_+} = 0,  γ_L = 0,  γ_α ≤ 0.                                        (3.3)

Conversely, let (¯x, ¯y, ¯u) be a feasible solution for (OPCC), and suppose that for all
index sets α ⊆ I_0 there exist ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q, γ ∈ R^q such that (3.1)–(3.3)
are satisfied. If f is either convex or pseudoconvex, g is convex, ψ, L are affine, and Ω
is convex, then (¯x, ¯y, ¯u) is a minimum of f over all (x, y, u) ∈ ∪_{α⊆I_0} F_{α∪L}. If in
addition to the above assumptions I_0 = {1, 2, . . . , q}, then (¯x, ¯y, ¯u) is a global solution
for (OPCC).
Proof. It is obvious that the feasible region of (OPCC) can be represented as
the union of pieces F = ∪_{I⊆{1,2,...,q}} F_I. Since ¯u_i > 0 ∀ i ∈ L(¯u) and ψ_i(¯x, ¯y, ¯u) < 0
∀ i ∈ I_+(¯x, ¯y, ¯u), and

    F_{α∪L} = {(x, y, u) :  L(x, y, u) = 0,  g(x, y, u) ≤ 0,
                            u_i ≥ 0, ψ_i(x, y, u) = 0,   i ∈ α,
                            u_i ≥ 0, ψ_i(x, y, u) = 0,   i ∈ L,
                            u_i = 0, ψ_i(x, y, u) ≤ 0,   i ∈ I_+,
                            u_i = 0, ψ_i(x, y, u) ≤ 0,   i ∈ I_0\α },

we have

    (¯x, ¯y, ¯u) ∈ ∩_{α⊆I_0} F_{α∪L}    and    (¯x, ¯y, ¯u) ∉ F \ (∪_{α⊆I_0} F_{α∪L}).

Hence if (¯x, ¯y, ¯u) is optimal for (OPCC), then for any given index set α ⊆ I_0, (¯x, ¯y, ¯u)
is also a minimum for f over F_{α∪L}. Since this problem is calm, by the well-known
nonsmooth necessary optimality condition (see, e.g., [1, 2, 3]), there exist ξ ∈ R^l,
ζ ∈ R^d, η ∈ R^q, γ ∈ R^q such that (3.1)–(3.3) are satisfied. Conversely, suppose
that for each α ⊆ I_0 there exist ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q, γ ∈ R^q such that (3.1)–
(3.3) are satisfied and the problem is convex. By virtue of Remarks 2.1 and 2.2, the
limiting subgradients and the limiting normal cones coincide with the subgradients
and the normal cone in the sense of convex analysis, respectively. Hence, by the
standard first-order sufficient optimality conditions, (¯x, ¯y, ¯u) is a minimum of f over
F_{α∪L} for each α ⊆ I_0 and hence is a minimum of f over ∪_{α⊆I_0} F_{α∪L}. In the case when
I_0 = {1, 2, . . . , q}, L = ∅ and the feasible region F = ∪_{α⊆I_0} F_{α∪L}. Hence (¯x, ¯y, ¯u)
is a global optimum for (OPCC) in this case. The proof of the proposition is now
complete.
Remark 3.1. The necessary part of the above proposition with smooth problem
data is given by Luo, Pang, and Ralph in [4] under the so-called “basic constraint
qualification.”
Note that the multipliers in Proposition 3.1 depend on the index set α through
(3.3). However, if for some pair of index sets α (⊆ I_0) and I_0\α, the components
(η_{I_0}, γ_{I_0}) of the multipliers are the same, then we would have a necessary condition
that does not depend on the index set α. In this case the necessary condition turns
out to be the necessary condition involving the proximal coderivatives as in (b) of the
following theorem.
Theorem 3.2. Suppose f, g, L, ψ are continuously differentiable. Then the fol-
lowing three conditions are equivalent:
(a) There exist ξ ∈ R^l, ζ ∈ R^d, η, γ ∈ R^q such that

    0 = ∇f(¯x, ¯y, ¯u) + ∑_{i=1}^l ξ_i ∇L_i(¯x, ¯y, ¯u) + ∑_{i=1}^d ζ_i ∇g_i(¯x, ¯y, ¯u)
            − ∑_{i=1}^q η_i ∇ψ_i(¯x, ¯y, ¯u) + {(0, 0, γ)},                                 (3.4)
    ζ ≥ 0,  ⟨ζ, g(¯x, ¯y, ¯u)⟩ = 0,                                                         (3.5)
    η_{I_0} ≤ 0,  η_{I_+} = 0,  γ_L = 0,  γ_{I_0} ≤ 0.                                      (3.6)

(b) There exist ξ ∈ R^l, ζ ∈ R^d, η ∈ R^q such that

    0 ∈ ∇f(¯x, ¯y, ¯u) + ∑_{i=1}^l ξ_i ∇L_i(¯x, ¯y, ¯u) + ∑_{i=1}^d ζ_i ∇g_i(¯x, ¯y, ¯u)
            − ∑_{i=1}^q η_i ∇ψ_i(¯x, ¯y, ¯u) + {0} × {0} × D^π N_{R^q_+}(¯u, ψ(¯x, ¯y, ¯u))(η),   (3.7)
    ζ ≥ 0,  ⟨ζ, g(¯x, ¯y, ¯u)⟩ = 0.                                                          (3.8)

(c) There exist ξ ∈ R^l, ζ ∈ R^d, η, γ ∈ R^q, α, β ∈ R^{2q}_+ such that (3.4) and (3.5)
are satisfied and

    0 = ∑_{i=1}^q ¯u_i(α_i + β_i) − ∑_{i=1}^q ψ_i(¯x, ¯y, ¯u)(α_{q+i} + β_{q+i}),
    η_i = −α_{q+i} + ¯u_i β_{q+i},           i = 1, 2, . . . , q,
    γ_i = −α_i − ψ_i(¯x, ¯y, ¯u) β_i,          i = 1, 2, . . . , q.
Let (¯x, ¯y, ¯u) be a local optimal solution to (OPCC), where Ω = R^{n+m+q}. Suppose
that there exists an index set α ⊆ I_0 such that the problem of minimizing f over F_{α∪L}
and the problem of minimizing f over F_{(I_0\α)∪L} are calm. Furthermore, suppose that

    0 = ∑_{i=1}^l ξ_i ∇L_i(¯x, ¯y, ¯u) + ∑_{i=1}^d ζ_i ∇g_i(¯x, ¯y, ¯u)
            − ∑_{i=1}^q η_i ∇ψ_i(¯x, ¯y, ¯u) + {(0, 0, γ)},                                 (3.9)
    0 = ⟨ζ, g(¯x, ¯y, ¯u)⟩,  η_{I_+} = 0,  γ_L = 0                                          (3.10)

implies that η_{I_0} = 0, γ_{I_0} = 0. Then the three equivalent conditions (a)–(c) hold.
Conversely, let (¯x, ¯y, ¯u) be a feasible solution to (OPCC), where Ω = R^{n+m+q}, and
let f be pseudoconvex, g be convex, and ψ, L be affine. If one of the equivalent conditions
(a)–(c) holds, then (¯x, ¯y, ¯u) is a minimum of f over all (x, y, u) ∈ ∪_{α⊆I_0} F_{α∪L}. If in
addition to the above assumptions I_0 = {1, 2, . . . , q}, then (¯x, ¯y, ¯u) is a global solution
for (OPCC).
Proof. By the definition of the proximal coderivatives (Definition 2.3),

    γ ∈ D^π N_{R^q_+}(¯u, ψ(¯x, ¯y, ¯u))(η)

if and only if

    (γ, −η) ∈ N^π((¯u, ψ(¯x, ¯y, ¯u)), GrN_{R^q_+}).

Hence the equivalence of condition (a) and condition (b) follows from Proposition 2.7.
The equivalence of condition (b) and condition (c) follows from Proposition 2.8.
Let (¯x, ¯y, ¯u) be a local optimal solution to (OPCC), where Ω = R^{n+m+q}. Then
it is also a local optimal solution to the problem of minimizing f over F_{α∪L} and
the problem of minimizing f over F_{(I_0\α)∪L}. By the calmness assumption for these
two problems, there exist ξ^i ∈ R^l, ζ^i ∈ R^d, η^i ∈ R^q, γ^i ∈ R^q, i = 1, 2, satisfying
(3.1)–(3.3), which implies that

    0 = ∑_{i=1}^l (ξ^1_i − ξ^2_i) ∇L_i(¯x, ¯y, ¯u) + ∑_{i=1}^d (ζ^1_i − ζ^2_i) ∇g_i(¯x, ¯y, ¯u)
            − ∑_{i=1}^q (η^1_i − η^2_i) ∇ψ_i(¯x, ¯y, ¯u) + {(0, 0, γ^1 − γ^2)},
    0 = ⟨ζ^1 − ζ^2, g(¯x, ¯y, ¯u)⟩,  (η^1 − η^2)_{I_+} = 0,  (γ^1 − γ^2)_L = 0.

By the assumption we arrive at η^1_{I_0} = η^2_{I_0}, γ^1_{I_0} = γ^2_{I_0}. Since by (3.3), η^1_{I_0\α} ≤ 0, γ^1_α ≤ 0
and η^2_α ≤ 0, γ^2_{I_0\α} ≤ 0, we have

    η^1_{I_0} = η^2_{I_0} ≤ 0,    γ^1_{I_0} = γ^2_{I_0} ≤ 0.

That is, condition (a) holds.
The sufficient part of the theorem follows from the sufficient part of Proposition
3.1.
As observed in [4, Proposition 4.3.5], the necessary optimality conditions (3.4)–
(3.6) happen to be the KKT conditions for the relaxed problem

(RP)        min   f(x, y, u)
            s.t.  u_i ≥ 0, ψ_i(x, y, u) = 0,   i ∈ L(¯u),
                  u_i = 0, ψ_i(x, y, u) ≤ 0,   i ∈ I_+(¯x, ¯y, ¯u),
                  u_i ≥ 0, ψ_i(x, y, u) ≤ 0,   i ∈ I_0(¯x, ¯y, ¯u),
                  L(x, y, u) = 0,  g(x, y, u) ≤ 0,

and (ξ, ζ, η, γ) satisfies (3.4)–(3.6) if and only if it satisfies the KKT condition for the
subproblem of minimizing f over the feasible region F_{α∪L}, i.e., (3.1)–(3.3) with the
smooth problem data and Ω = R^{n+m+q}, for all index sets α ⊆ I_0(¯x, ¯y, ¯u). Conse-
quently, if the strict Mangasarian–Fromovitz constraint qualification (SMFCQ) holds
for problem (RP) at (ξ, ζ, η, γ) which satisfies (3.4)–(3.6), then (ξ, ζ, η, γ) is the unique
multiplier which satisfies (3.4)–(3.6). Since the index sets α only affect the (η_{I_0}, γ_{I_0})
components of the multiplier (ξ, ζ, η, γ), we observe that the existence of multipliers
satisfying (3.4)–(3.6) is equivalent to the existence of multipliers satisfying (3.1)–(3.3)
for all index sets α ⊆ I_0(¯x, ¯y, ¯u) with the components (η_{I_0}, γ_{I_0}) having the same sign.
From the proof of Theorem 3.2, it is easy to see that the condition that every (ξ, ζ, η, γ)
satisfying (3.9)–(3.10) has η_{I_0} = 0, γ_{I_0} = 0 is a sufficient condition for the existence
of common (η_{I_0}, γ_{I_0}) components of the multiplier (ξ, ζ, η, γ) for all index sets
α ⊆ I_0(¯x, ¯y, ¯u). Hence this condition refines the sufficient condition of a unique
multiplier such as the SMFCQ for the relaxed problem proposed in [4, Proposition 4.3.5].
We now give an example which does not have a unique multiplier satisfying (3.4)–
(3.6) but does satisfy the condition proposed in Theorem 3.2.
Example 3.1 (see [4, Example 4.3.6]). Consider the following OPCC:

    minimize  x_3 + u_1 + u_2
    s.t.      u ≥ 0,  ψ(x, u) := (x_1 − u_1, x_2 − u_2) ≤ 0,
              ⟨u, ψ(x, u)⟩ = 0,
              x_3 ≤ 0,  −2x_3 ≤ 0.

The points (¯x, ¯u) = (¯x_1, ¯x_2, 0, 0, 0), where ¯x_1 and ¯x_2 are any real numbers for which
the point is feasible, are obviously solutions to the above problem. As pointed out in
[4, Example 4.3.6], SMFCQ does not hold for this problem. However, we can verify that
it satisfies our condition. Indeed, equation (3.9) for this problem is

    0 = ζ_1(0, 0, 1, 0, 0) + ζ_2(0, 0, −2, 0, 0) − η_1(1, 0, 0, −1, 0)
            − η_2(0, 1, 0, 0, −1) + (0, 0, 0, γ_1, γ_2),

which implies that η = 0, γ = 0.
Moreover, the calmness condition is satisfied since the constraint region for each
subproblem F_{α∪L} is a polyhedron due to the fact that ψ and g are both affine. Hence,
by Theorem 3.2, if (¯x, ¯u) is a local minimum to the above problem, then there exist
ζ, η, γ such that

    0 = (0, 0, 1, 1, 1) + ζ_1(0, 0, 1, 0, 0) + ζ_2(0, 0, −2, 0, 0)
            − η_1(1, 0, 0, −1, 0) − η_2(0, 1, 0, 0, −1) + (0, 0, 0, γ_1, γ_2),
    ζ ≥ 0,  ζ_1 ¯x_3 = 0,  −2ζ_2 ¯x_3 = 0,
    η_{I_0} ≤ 0,  η_{I_+} = 0,  γ_L = 0,  γ_{I_0} ≤ 0,

which implies η_1 = η_2 = 0, γ_1 = γ_2 = −1, and ¯x_3 = 0. Since I_0(¯x, ¯u) = {1, 2} for
(¯x, ¯u) = 0, the point 0 is a global optimal solution according to Theorem 3.2, and
(¯x, 0, 0) with ¯x ≠ 0 are local optimal solutions.
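The implication used in Example 3.1 — that (3.9)–(3.10) force η = 0 and γ = 0 — can be checked numerically by computing the null space of the corresponding homogeneous linear system. The sketch below is my own verification of the system as reconstructed above, with the unknown vector ordered as (ζ_1, ζ_2, η_1, η_2, γ_1, γ_2); the row for ⟨ζ, g(¯x, ¯u)⟩ = 0 is omitted because g vanishes at the point considered. The same kind of check applies to any smooth OPCC at a feasible point, with that extra row added when g is nonzero.

```python
import numpy as np

# Rows: the five components of (3.9) for Example 3.1 at (x, u) = 0;
# columns: the unknowns (zeta_1, zeta_2, eta_1, eta_2, gamma_1, gamma_2).
A = np.array([
    [0.0,  0.0, -1.0,  0.0, 0.0, 0.0],   # x_1-component: -eta_1 = 0
    [0.0,  0.0,  0.0, -1.0, 0.0, 0.0],   # x_2-component: -eta_2 = 0
    [1.0, -2.0,  0.0,  0.0, 0.0, 0.0],   # x_3-component: zeta_1 - 2*zeta_2 = 0
    [0.0,  0.0,  1.0,  0.0, 1.0, 0.0],   # u_1-component: eta_1 + gamma_1 = 0
    [0.0,  0.0,  0.0,  1.0, 0.0, 1.0],   # u_2-component: eta_2 + gamma_2 = 0
])

# Null space of A via SVD: the trailing rows of V^T span the null space.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[len(s[s > 1e-10]):]

# Condition of Theorem 3.2: every solution has eta_{I_0} = 0 and gamma_{I_0} = 0,
# i.e. the eta- and gamma-entries of every null-space vector vanish.
print(np.allclose(null_basis[:, 2:6], 0.0))   # True: the condition holds
```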
To illustrate the application of the result obtained, we now consider the follow-
ing bilevel programming problem (BLQP), where the lower level problem is linear
quadratic:

(BLQP)      min   f(x, y)
            s.t.  y ∈ S(x),
                  Gx + Hy + a ≤ 0,

where G and H are l × n and l × m matrices, respectively, a ∈ R^l, and S(x) is the
solution set of the quadratic programming problem with parameter x:

(QP_x)      min   ⟨y, P x⟩ + (1/2)⟨y, Qy⟩ + p^⊤x + q^⊤y
            s.t.  Dx + Ey + b ≤ 0,

where Q ∈ R^{m×m} is a symmetric and positive semidefinite matrix, p ∈ R^n, q ∈ R^m,
P ∈ R^{m×n}, D and E are q × n and q × m matrices, respectively, and b ∈ R^q.
Replacing the bilevel constraint by the KKT condition for the lower level problem,
it is easy to see that (BLQP) is equivalent to the problem

(KKT)       min   f(x, y)
            s.t.  ⟨Dx + Ey + b, u⟩ = 0,  u ≥ 0,  Dx + Ey + b ≤ 0,
                  Qy + P x + q + E^⊤u = 0,
                  Gx + Hy + a ≤ 0,
which is an OPCC. Let (¯x, ¯y) be an optimal solution of (BLQP) and ¯u a corresponding
multiplier; i.e.,

    0 = Q¯y + P ¯x + q + E^⊤¯u,                                                            (3.11)
    ⟨D¯x + E¯y + b, ¯u⟩ = 0,  ¯u ≥ 0.                                                       (3.12)

Then

    L = {1 ≤ i ≤ q : ¯u_i > 0},
    I_+ = {1 ≤ i ≤ q : ¯u_i = 0, (D¯x + E¯y + b)_i < 0},
    I_0 = {1 ≤ i ≤ q : ¯u_i = 0, (D¯x + E¯y + b)_i = 0}.
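For a concrete (BLQP) instance, the multiplier relations (3.11)–(3.12) and the index sets above are straightforward to evaluate numerically. The sketch below is an illustration of mine (the function name and the toy data are assumptions, not from the paper); indices are reported 0-based, whereas the paper counts from 1.

```python
import numpy as np

def blqp_index_sets(Q, P, q_vec, D, E, b, x_bar, y_bar, u_bar, tol=1e-9):
    """Check (3.11)-(3.12) and return the index sets L, I_+, I_0 for (BLQP)."""
    slack = D @ x_bar + E @ y_bar + b                 # lower-level constraint values
    stationarity = np.allclose(Q @ y_bar + P @ x_bar + q_vec + E.T @ u_bar, 0.0, atol=tol)
    complementarity = (abs(u_bar @ slack) <= tol and np.all(u_bar >= -tol)
                       and np.all(slack <= tol))
    L  = [i for i in range(len(u_bar)) if u_bar[i] > tol]
    Ip = [i for i in range(len(u_bar)) if abs(u_bar[i]) <= tol and slack[i] < -tol]
    I0 = [i for i in range(len(u_bar)) if abs(u_bar[i]) <= tol and abs(slack[i]) <= tol]
    return stationarity and complementarity, L, Ip, I0

# Toy data (assumed): n = m = 1, two lower-level constraints y <= x and -y <= 0.
Q, P, q_vec = np.array([[1.0]]), np.array([[0.0]]), np.array([0.0])
D, E, b = np.array([[-1.0], [0.0]]), np.array([[1.0], [-1.0]]), np.array([0.0, 0.0])
x_bar, y_bar, u_bar = np.array([1.0]), np.array([0.0]), np.array([0.0, 0.0])
print(blqp_index_sets(Q, P, q_vec, D, E, b, x_bar, y_bar, u_bar))  # (True, [], [0], [1])
```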
The feasible region of problem (KKT) is

    F = {(x, y, u) ∈ R^{n+m+q} :  Qy + P x + q + E^⊤u = 0,  Gx + Hy + a ≤ 0,
                                  ⟨u, Dx + Ey + b⟩ = 0,  u ≥ 0,  Dx + Ey + b ≤ 0 },

and for any I ⊆ {1, 2, . . . , q},

    F_I = {(x, y, u) ∈ R^{n+m+q} :  Qy + P x + q + E^⊤u = 0,  Gx + Hy + a ≤ 0,
                                    u_i ≥ 0, (Dx + Ey + b)_i = 0,   i ∈ I,
                                    u_i = 0, (Dx + Ey + b)_i ≤ 0,   i ∈ {1, 2, . . . , q}\I }.

Since F_{α∪L} for any index set α ⊆ I_0 has linear constraints only, the problem of
minimizing f over F_{α∪L} is calm. Hence the following result follows from Proposition
3.1.
Corollary 3.3. Let (¯x, ¯y) be an optimal solution of (BLQP) and ¯u a corre-
sponding multiplier. Suppose that f is locally Lipschitz near (¯x, ¯y). Then for each
α ⊆ I_0, there exist ξ ∈ R^m, ζ ∈ R^d, η ∈ R^q such that

    0 ∈ ∂̂f(¯x, ¯y) + {P^⊤ξ} × {Q^⊤ξ} + {G^⊤ζ} × {H^⊤ζ} − {D^⊤η} × {E^⊤η},
    ζ ≥ 0,  ⟨G¯x + H¯y + a, ζ⟩ = 0,
    η_{I_0\α} ≤ 0,  η_{I_+} = 0,  (Eξ)_L = 0,  (Eξ)_α ≥ 0.

If f is either convex or pseudoconvex, then the above necessary condition is also
sufficient for a feasible solution (¯x, ¯y, ¯u) of (KKT) to be a minimum of f over all
(x, y, u) ∈ ∪_{α⊆I_0} F_{α∪L}. In particular, if f is either convex or pseudoconvex and I_0 =
{1, 2, . . . , q}, then the above condition is sufficient for a feasible solution (¯x, ¯y) to be
a global optimum for (BLQP).
The following result follows from Theorem 3.2.
Corollary 3.4. Let (¯x, ¯y) be an optimal solution of (BLQP) and ¯u a corre-
sponding multiplier. Suppose that f is C^1 and that

    0 = P^⊤ξ + G^⊤ζ − D^⊤η,                                                               (3.13)
    0 = Q^⊤ξ + H^⊤ζ − E^⊤η,                                                               (3.14)
    0 = ⟨G¯x + H¯y + a, ζ⟩,                                                                (3.15)
    η_{I_+} = 0,  (Eξ)_L = 0

implies η_{I_0} = (Eξ)_{I_0} = 0. Then there exist ξ ∈ R^m, ζ ∈ R^d, η ∈ R^q such that

    0 = ∇f(¯x, ¯y) + {P^⊤ξ} × {Q^⊤ξ} + {G^⊤ζ} × {H^⊤ζ} − {D^⊤η} × {E^⊤η},                  (3.16)
    ζ ≥ 0,  ⟨G¯x + H¯y + a, ζ⟩ = 0,                                                        (3.17)
    η_{I_0} ≤ 0,  η_{I_+} = 0,  (Eξ)_L = 0,  (Eξ)_{I_0} ≥ 0.

Equivalently, there exist ξ ∈ R^m, ζ ∈ R^d, η ∈ R^q such that (3.16)–(3.17) are satisfied
and

    (−Eξ, −η) ∈ N^π((¯u, D¯x + E¯y + b), GrN_{R^q_+}).

Equivalently, there exist ξ ∈ R^m, ζ ∈ R^d, η ∈ R^q, α, β ∈ R^{2q}_+ such that (3.16)–(3.17)
are satisfied and

    0 = ∑_{i=1}^q ¯u_i(α_i + β_i) − ∑_{i=1}^q (D¯x + E¯y + b)_i(α_{q+i} + β_{q+i}),
    η_i = −α_{q+i} + ¯u_i β_{q+i},                     i = 1, 2, . . . , q,
    (Eξ)_i = α_i + (D¯x + E¯y + b)_i β_i,              i = 1, 2, . . . , q.

Conversely, let (¯x, ¯y) be any vector in R^{n+m} satisfying the constraints G¯x + H¯y + a ≤ 0
and D¯x + E¯y + b ≤ 0, and let f be pseudoconvex. If there exists ¯u ∈ R^q satisfying
(3.11)–(3.12) such that one of the above equivalent conditions holds, then (¯x, ¯y, ¯u) is a
minimum of f over all (x, y, u) ∈ ∪_{α⊆I_0} F_{α∪L}. If in addition to the above assumptions
I_0 = {1, 2, . . . , q}, then (¯x, ¯y) is a global minimum for (BLQP).
Acknowledgments. The author would like to thank Dr. Qing Lin for a helpful
discussion of Proposition 2.8.
REFERENCES
[1] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience, New York, 1983;
reprinted by SIAM, Philadelphia, 1990.
[2] F. H. Clarke, Methods of Dynamic and Nonsmooth Optimization, CBMS-NSF Regional Con-
ference Series in Applied Mathematics, Vol. 57, SIAM, Philadelphia, 1989.
[3] P. D. Loewen, Optimal Control via Nonsmooth Analysis, CRM Proceedings and Lecture
Notes, AMS, Providence, RI, 1993.
[4] Z.-Q. Luo, J.-S. Pang, and D. Ralph, Mathematical Programs with Equilibrium Constraints,
Cambridge University Press, London, UK, 1996.
[5] Z.-Q. Luo, J.-S. Pang, and D. Ralph, Piecewise Sequential Quadratic Programming for
Mathematical Programs with Nonlinear Complementarity Constraints, in Multilevel Op-
timization: Algorithms and Applications, Nonconvex Optim. Anal. 20, Kluwer Academic
Publishers, Norwell, MA, 1998.
[6] B. S. Mordukhovich, Generalized differential calculus for nonsmooth and set-valued map-
pings, J. Math. Anal. Appl., 183 (1994), pp. 250–288.
[7] J. J. Ye and X. Y. Ye, Necessary optimality conditions for optimization problems with vari-
ational inequality constraints, Math. Oper. Res., 22 (1997), pp. 977–997.
[8] J. J. Ye and D. L. Zhu, Optimality conditions for bilevel programming problems, Optimization,
33 (1995), pp. 9–27.
[9] J. J. Ye, D. L. Zhu, and Q. J. Zhu, Exact penalization and necessary optimality conditions
for generalized bilevel programming problems, SIAM J. Optim., 7 (1997), pp. 481–507.