Robust Model Predictive Control for Non-Linear Systems with Input and
State Constraints Via Feedback Linearization
Yash Vardhan Pant, Houssam Abbas, Rahul Mangharam
Abstract— Robust predictive control of non-linear systems
under state estimation errors and input and state constraints
is a challenging problem, and solutions to it have generally
involved solving computationally hard non-linear optimizations.
Feedback linearization has reduced the computational burden,
but has not yet been solved for robust model predictive control
under estimation errors and constraints. In this paper, we solve
this problem of robust control of a non-linear system under
bounded state estimation errors and input and state constraints
using feedback linearization. We do so by developing robust
constraints on the feedback linearized system such that the
non-linear system respects its constraints. These constraints
are computed at run-time using online reachability, and are
linear in the optimization variables, resulting in a Quadratic
Program with linear constraints. We also provide robust feasibility, recursive feasibility, and stability results for our control
algorithm. We evaluate our approach on two systems to show
its applicability and performance.
I. INTRODUCTION
In this paper we are concerned with the problem of controlling nonlinear dynamical systems $S$ of the form $\dot{x} = f(x) + G(x)u$ under state and input constraints, and subject to errors in the state estimate. This problem is formulated as
$$\min_{\mathbf{x},\mathbf{u}} \; l(\mathbf{x},\mathbf{u}) \quad (1)$$
$$\text{s.t. } \dot{x} = f(x) + G(x)u, \quad x \in X, \; u \in U$$
where $l(\mathbf{x},\mathbf{u})$ is a cost function whose minimization over the state and input trajectories $\mathbf{x}$ and $\mathbf{u}$ ensures stability of the system. Sets $X \subset \mathbb{R}^{n_x}$ and $U \subset \mathbb{R}^{n_u}$ encode constraints on the state (e.g., safety) and the input. The input $u = u(\hat{x})$ is a function of a state estimate that in general differs from the true state of the system.
The application of Model Predictive Control (MPC) to nonlinear systems involves the repeated solution of generally non-quadratic, non-convex optimizations. Various approaches for solving (or approximately solving) the optimizations and their trade-offs are reviewed in [1]. Another approach is to first feedback linearize the system $S$ [2]: namely, the applied control $u = u(x, v)$ is designed in such a way that the resulting closed-loop dynamics $S_{fl}$ are now linear: $S_{fl}: \dot{z} = Az + Bv$. The input $v$ to the linearized dynamics can now be computed so as to optimize system performance and ensure stability. The state $z$ of the linearized
system $S_{fl}$ is related to the state $x$ of the nonlinear system $S$ via a (system-specific) function $T$: $z = T(x)$.

*This work was supported by STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, NSF MRI-0923518, and the US Department of Transportation University Transportation Center Program. The authors are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, U.S.A. {yashpant,habbas,rahulm}@seas.upenn.edu
Previous work on nonlinear MPC with feedback linearization assumed that the state $x(t)$ is perfectly known to the controller at any moment in time [3]. However, in many cases only a state estimate $\hat{x}(t)$ is available, with $\hat{x}(t) \neq x(t)$, and we handle such cases. Robust MPC (RMPC) has been investigated as a way of handling state estimation errors for linear [4] and nonlinear systems [5], [6], but not via feedback linearization. In particular, for non-linear systems, [5] develops a non-linear MPC with tube-like constraints for robust feasibility, but involves solving two (non-convex) optimal control problems. In [6], the authors solve a non-linear Robust MPC through a bi-level optimization that involves solving a non-linear, non-smooth optimization, which is challenging. [6] also guarantees a weaker form of recursive feasibility than [4] and than what we guarantee in this work.

In [7] the authors approximate the non-linear dynamics of a quadrotor by linearizing it around hover and apply the RMPC of [4] to the linearized dynamics. This differs significantly from our approach, where we formulate the RMPC on the feedback linearized dynamics directly, and not on the dynamics obtained via Jacobian linearization of the non-linear system. Existing work on MPC via feedback linearization and input/state constraints has also assumed that either $T$ is the identity [3], or, in the case of uncertainties in the parameters, that there are no state constraints [8]. A non-identity $T$ is problematic when the state is not perfectly known, since the state estimation error $e = \hat{x} - x$ maps to the linearized dynamics via $T$ in non-trivial ways, greatly complicating the analysis. In particular, the error bounds for the state estimate in $z$-space now depend on the current nonlinear state $x$. One of the complications introduced by feedback linearization is that the bounds on the input ($u \in U$) may become a non-convex, state-dependent constraint on the input $v$ to $S_{fl}$: $V = \{\underline{v}(x, U) \le v \le \overline{v}(x, U)\}$. In [3] forward reachability is used to provide inner convex approximations to the input set $V$. A non-identity $T$ increases the computational burden since the non-linear reach set must be computed (with an identity $T$, the feedback linearized reach set is sufficient).
Contributions: We develop a feedback linearization solution to the above control problem, with state estimation errors, input and state constraints, and non-identity $T$. To the best of our knowledge, this is the first feedback linearization solution to this problem. The resulting control problem is solved by RMPC with time-varying linear constraint sets.
The paper is organized as follows: in the next section we
formulate the feedback linearized control problem. In Sec.
III, we describe the RMPC algorithm we use to solve it, and
prove that it stabilizes the nonlinear system. Sec. IV shows
how to compute the various constraint sets involved in the
RMPC formulation, and Sec. V applies our approach to an
example. Sec. VI concludes the paper. An online technical
report [9] contains proofs and more examples.
II. PROBLEM FORMULATION
A common method for control of nonlinear systems is feedback linearization [2]. Briefly, in feedback linearization one applies the feedback law $u(x, v) = R(x)^{-1}(-b(x) + v)$ to (1), so that the resulting dynamics, expressed in terms of the transformed state $z = T(x)$, are linear time-invariant:
$$S_{fl}: \dot{z} = A_c z + B_c v \quad (2)$$
By using the remaining control authority in $v$ to control $S_{fl}$, we can effectively control the non-linear system for, say, stability or reference tracking. $T$ is a diffeomorphism [2] over a domain $D \subset X$. The original and transformed states, $x$ and $z$, have the same dimension, as do $u$ and $v$, i.e. $n_x = n_z$ and $n_u = n_v$. Because we are controlling the feedback linearized system, we must find constraint sets $Z$ and $V$ for the state $z$ and input $v$, respectively, such that $(z, v) \in Z \times V \implies (T^{-1}(z), u(T^{-1}(z), v)) \in X \times U$. We assume that the system (1) has no zero dynamics [2] and all states are controllable. In case there are zero dynamics, our approach is applicable to the controllable subset of the states as long as the span of the rows of $G(x)$ is involutive [2].
For feedback linearizing and controlling (1), only a periodic state estimate $\hat{x}$ of $x$ is available. This estimate is available every $\tau$ time units, so we may write $\hat{x}_k := \hat{x}(k\tau) = x_k + e_k$, where $x_k$ and $e_k$ are the sampled state and error, respectively. We assume that $e_k$ lies in a bounded set $E$ for all $k$. This implies that the feedback linearized system can be represented in discrete time: $z_{k+1} = Az_k + Bv_k$. The corresponding $z$-space estimate $\hat{z}_k$ is $\hat{z}_k = T(\hat{x}_k)$. In general the $z$-space error $\tilde{e}_k := T(\hat{x}_k) - T(x_k)$ is bounded for every $k$ but does not necessarily lie in $E$. Let $\widetilde{E}_k$ be the set containing $\tilde{e}_k$; in Sec. IV-C we show how to compute it. Because the linearizing control operates on the state estimate and not on $x_k$, we add a process noise term to the linearized, discrete-time dynamics. Our system model is therefore
$$z_{k+1} = Az_k + Bv_k + w_k \quad (3)$$
where the noise term $w_k$ lies in the bounded set $W$ for all $k$. An estimate of $W$ can be obtained using the techniques of this paper. The problem (1) is therefore replaced by:
$$\min_{\mathbf{z},\mathbf{v}} \; q(\mathbf{z},\mathbf{v}) = \sum_{k=0}^{\infty} z_k^T Q z_k + v_k^T R v_k \quad (4)$$
$$\text{s.t. } z_{k+1} = Az_k + Bv_k + w_k, \quad z_k \in Z, \; v_k \in V, \; w_k \in W$$
In general, the cost function $l(\mathbf{x},\mathbf{u}) \neq q(\mathbf{z},\mathbf{v})$. The objective function in (4) is a choice the control designer makes. For more details, and for when the quadratic form of $q(\mathbf{z},\mathbf{v})$ is justified, see [3]. In Thm. 2, we show that minimizing this cost function, $q(\mathbf{z},\mathbf{v})$, implies stability of the system.
It is easy to derive the dynamics of the state estimate $\hat{z}_k$:
$$\hat{z}_{k+1} = z_{k+1} + \tilde{e}_{k+1} = Az_k + Bv_k + w_k + \tilde{e}_{k+1} = A\hat{z}_k + Bv_k + (w_k + \tilde{e}_{k+1} - A\tilde{e}_k) = A\hat{z}_k + Bv_k + \hat{w}_{k+1} \quad (5)$$
where $\hat{w}_{k+1} = w_k + \tilde{e}_{k+1} - A\tilde{e}_k$, and lies in the set $\widehat{W}_{k+1} := W \oplus \widetilde{E}_{k+1} \oplus (-A\widetilde{E}_k)$.
Example 1: Consider the 2D system
$$\dot{x}_1 = \sin(x_2), \quad \dot{x}_2 = -x_1^2 + u \quad (6)$$
The safe set for $x$ is given as $X = \{|x_1| \le \pi/2, |x_2| \le \pi/3\}$, and the input set is $U = [-2.75, 2.75]$. For the measurement $y = h(x) = x_1$, the system can be feedback linearized on the domain $D = \{x \mid \cos(x_2) \neq 0\}$, where it has a relative degree of $\rho = 2$. The corresponding linearizing feedback input is $u = x_1^2 + v/\cos(x_2)$. The feedback linearized system is $\dot{z}_1 = z_2$, $\dot{z}_2 = v$, where $T$ is given by $z = T((x_1, x_2)) = (x_1, \sin(x_2))$. We can analytically compute the safe set in $z$-space as $Z = T(X) = \{|z_1| \le \pi/2, |z_2| \le 0.8660\}$.
For a more complicated $T$, it is not possible to obtain analytical expressions for $Z$. The computation of $Z$ in this more general case is addressed in the online appendix [9].
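As a quick sanity check of the running example, the following sketch (our own; the step size, initial state, and constant input are hypothetical, and it uses the linearizing input $u = x_1^2 + v/\cos(x_2)$ derived from (6) and $T$) integrates the nonlinear dynamics under the linearizing feedback and verifies that $T(x)$ tracks the double-integrator state:

```python
import numpy as np

def f_nonlinear(x, u):
    """Nonlinear dynamics (6): x1' = sin(x2), x2' = -x1^2 + u."""
    return np.array([np.sin(x[1]), -x[0] ** 2 + u])

def u_linearizing(x, v):
    """Linearizing input derived from (6) and T(x) = (x1, sin x2);
    valid on the domain D = {cos(x2) != 0}."""
    return x[0] ** 2 + v / np.cos(x[1])

def T(x):
    return np.array([x[0], np.sin(x[1])])

# Integrate the nonlinear system and the chain z1' = z2, z2' = v side by side.
dt, x, v = 1e-3, np.array([0.3, 0.2]), -0.5
z = T(x)
for _ in range(1000):
    x = x + dt * f_nonlinear(x, u_linearizing(x, v))
    z = z + dt * np.array([z[1], v])
print(np.max(np.abs(T(x) - z)))   # small: only Euler discretization error remains
```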
Notation. Given two subsets $A, B$ of $\mathbb{R}^n$, their Minkowski sum is $A \oplus B := \{a + b \mid a \in A, b \in B\}$. Their Pontryagin difference is $A \ominus B = \{c \in \mathbb{R}^n \mid c + b \in A \;\forall b \in B\}$. Given integers $n \le m$, $[n:m] := \{n, n+1, \ldots, m\}$.
Assumption. Our approach applies when $X$, $U$, $E$ and $W$ are arbitrary convex polytopes (i.e. bounded intersections of half-spaces). For the sake of simplicity, in this paper we assume they are all hyper-rectangles that contain the origin.
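Since all sets are hyper-rectangles, the Minkowski sum and Pontryagin difference used throughout reduce to coordinate-wise interval arithmetic. A minimal Python sketch (our own helper names and example boxes, not from the paper):

```python
import numpy as np

# A hyper-rectangle is a pair (lo, hi) of vectors with lo <= hi coordinate-wise.
def mink_sum(A, B):
    """Minkowski sum A (+) B of two boxes."""
    return (A[0] + B[0], A[1] + B[1])

def pont_diff(A, B):
    """Pontryagin difference A (-) B = {c : c + b in A for all b in B}."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    if np.any(lo > hi):
        raise ValueError("empty Pontryagin difference")
    return (lo, hi)

# Example with the z-space safe set of the running example and a small error box.
Z = (np.array([-np.pi / 2, -0.866]), np.array([np.pi / 2, 0.866]))
Et = (np.array([-0.02, -0.02]), np.array([0.02, 0.02]))   # a hypothetical error box
print(pont_diff(Z, Et))   # Z shrunk by the error box
print(mink_sum(Z, Et))    # Z inflated by the error box
```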
III. ROBUST MPC FOR THE FEEDBACK LINEARIZED SYSTEM
Following [4], [10], we formulate a Robust MPC (RMPC) controller of (4) via constraint restriction. We outline the idea before providing the technical details. The key idea is to move the effects of the estimation error $\tilde{e}_k$ and process noise $w_k$ (the 'disturbances') to the constraints, and work with the nominal (i.e., disturbance-free) dynamics: $\bar{z}_{k+1} = A\bar{z}_k + Bv_k$, $\bar{z}_0 = \hat{z}_0$. Because we would be optimizing over disturbance-free states, we must account for the noise in the constraints. Specifically, rather than require the next (nominal) state $\bar{z}_{k+1}$ to be in $Z$, we require it to be in the shrunk set $Z \ominus \widehat{W}_{k+1|k} \ominus (-\widetilde{E}_{k+1|k})$: by definition of the Pontryagin difference, this implies that whatever the actual value of the noise $\hat{w}_{k+1} \in \widehat{W}_{k+1|k}$ and of the estimation error $\tilde{e}_{k+1} \in \widetilde{E}_{k+1|k}$, the actual state $z_{k+1}$ will be in $Z$. This is repeated over the entire MPC prediction horizon $j = 1, \ldots, N$, with further shrinking at every step. For further steps ($j > 1$), the process noise $\hat{w}_{k+j|k}$ is propagated through the dynamics, so the shrinking term $\widehat{W}$ is shaped by a stabilizing feedback controller $\bar{z} \mapsto K\bar{z}$. At the final step ($j = N+1$), a terminal constraint is derived using the worst-case estimation error set $\widetilde{E}_{max}$ and a global inner approximation for the input constraints, $V_{inner-global}$.
Through this successive constraint tightening we ensure
robust safety and feasibility of the feedback linearized system
(and hence of the non-linear system). Since we use just the
nominal dynamics, and show that the tightened constraints
are linear in the state and inputs, we still solve a Quadratic
Program (QP) for the RMPC optimization. The difficulty of
applying RMPC in our setting is that the amounts by which
the various sets are shrunk vary with time because of the
time-varying state estimation error, are state-dependent, and
involve set computations with the non-convexity preserving
mapping T. One of our contributions in this paper is to
establish recursive feasibility of RMPC with time-varying
constraint sets.
The RMPC optimization $P_k(\hat{z}_k)$ for solving (4) is:
$$J^*(\bar{z}_k) = \min_{\bar{\mathbf{z}},\mathbf{v}} \; \sum_{j=0}^{N} \left\{ \bar{z}_{k+j|k}^T Q \bar{z}_{k+j|k} + v_{k+j|k}^T R v_{k+j|k} \right\} + \bar{z}_{k+N+1|k}^T Q_f \bar{z}_{k+N+1|k} \quad (7a)$$
$$\bar{z}_{k|k} = \hat{z}_k \quad (7b)$$
$$\bar{z}_{k+j+1|k} = A\bar{z}_{k+j|k} + Bv_{k+j|k}, \quad j = 0, \ldots, N \quad (7c)$$
$$\bar{z}_{k+j|k} \in Z_{k+j|k}, \quad j = 0, \ldots, N \quad (7d)$$
$$v_{k+j|k} \in V_{k+j|k}, \quad j = 0, \ldots, N-1 \quad (7e)$$
$$p_{N+1} = [\bar{z}_{k+N+1|k},\, v_{k+N|k}]^T \in P_f \quad (7f)$$
Here, $\bar{z}$ is the state of the nominal feedback linearized system. The cost and constraints of the optimization are explained below:
• Eq. (7a) shows a cost quadratic in $\bar{z}$ and $v$, where as usual $Q$ is positive semi-definite and $R$ is positive definite. In the terminal cost term, $Q_f$ is the solution of the Lyapunov equation $Q_f - (A+BK)^T Q_f (A+BK) = Q + K^T R K$. This choice guarantees that the terminal cost equals the infinite-horizon cost under a linear feedback control $\bar{z} \mapsto K\bar{z}$ [11].
• Eq. (7b) initializes the nominal state with the current state estimate.
• Eq. (7c) gives the nominal dynamics of the discretized feedback linearized system.
• Eq. (7d) tightens the admissible set of the nominal state by a sequence of shrinking sets.
• Eq. (7e) constrains $v_{k+j|k}$ such that the corresponding $u(x, v)$ is admissible, and the RMPC is recursively feasible.
• Eq. (7f) constrains the final input and nominal state to be within a terminal set $P_f$.
The details of these sets' definitions and computations are given in Sec. IV.
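To make the structure of (7) concrete, the sketch below sets up one instance of $P_k(\hat{z}_k)$ with CVXPY for box-shaped constraint sets. All numerical data (dynamics, gain $K$, horizon, box bounds) are placeholders standing in for the quantities computed in Secs. III-A and IV, and the terminal set $P_f$ is simplified to a box; the point is only that the optimization is a QP with linear constraints.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_discrete_lyapunov

# Placeholder double-integrator data (n_z = 2, n_v = 1), 10 Hz discretization.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 1e-2 * np.eye(1)
K = np.array([[-1.0, -1.5]])                       # an assumed stabilizing gain
# Q_f solves Q_f - (A+BK)^T Q_f (A+BK) = Q + K^T R K, as required by (7a).
Qf = solve_discrete_lyapunov((A + B @ K).T, Q + K.T @ R @ K)
Qf = 0.5 * (Qf + Qf.T)                             # symmetrize against round-off

N = 10
z_hat = np.array([0.3, -0.2])                      # current estimate \hat z_k
# Placeholder tightened boxes standing in for Z_{k+j|k} and V_{k+j|k}.
Z_lo, Z_hi = np.array([-1.5, -0.8]), np.array([1.5, 0.8])
V_lo, V_hi = np.array([-2.0]), np.array([2.0])

zbar = cp.Variable((N + 2, 2))                     # nominal states \bar z_{k+j|k}
v = cp.Variable((N + 1, 1))                        # inputs v_{k+j|k}
cost = cp.quad_form(zbar[N + 1], Qf)               # terminal cost in (7a)
cons = [zbar[0] == z_hat]                          # initialization (7b)
for j in range(N + 1):
    cost += cp.quad_form(zbar[j], Q) + cp.quad_form(v[j], R)
    cons += [zbar[j + 1] == A @ zbar[j] + B @ v[j],        # nominal dynamics (7c)
             Z_lo <= zbar[j], zbar[j] <= Z_hi]             # tightened states (7d)
for j in range(N):
    cons += [V_lo <= v[j], v[j] <= V_hi]                   # tightened inputs (7e)
# Terminal constraint (7f), simplified here to a box on (z_{k+N+1|k}, v_{k+N|k}).
cons += [np.array([-1.0, -0.5]) <= zbar[N + 1], zbar[N + 1] <= np.array([1.0, 0.5]),
         np.array([-1.0]) <= v[N], v[N] <= np.array([1.0])]

prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()                                       # a QP with linear constraints
print(prob.status, v.value[0])                     # apply v*_{k|k} to the plant
```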
A. State and Input Constraints for the Robust MPC
The state and input constraints for the RMPC are defined
as follows:
The state constraints $Z_{k+j|k}$: The tightened state constraints are functions of the error sets $\widetilde{E}_{k+j|k}$ and disturbance sets $\widehat{W}_{k+j|k}$, and are defined for all $j = 0, \ldots, N$ as
$$Z_{k+j|k} = Z \ominus \bigoplus_{i=0}^{j-1} \left( L_i \widehat{W}_{k+(j-i)|k} \right) \ominus (-\widetilde{E}_{k+j|k}) \quad (8)$$
(Recall $Z$ is a subset of $T(X)$; $\widehat{W}_{k+j|k}$ and $\widetilde{E}_{k+j|k}$ bound the noise and the estimation error, respectively, and are formally defined in Sec. IV.) The state transition matrix $L_j$, $j = 0, \ldots, N$, is defined as $L_0 = I$, $L_{j+1} = (A+BK)L_j$. The intuition behind this construction was given at the start of this section.
The input constraints $V_{k+j|k}$: For all $j = 0, \ldots, N-1$,
$$V_{k+j|k} = \underline{V}_{k+j|k} \ominus \bigoplus_{i=0}^{j-1} K L_i \widehat{W}_{k+(j-i)|k} \quad (9)$$
where $\underline{V}_{k+j|k}$ is an inner approximation of the set of admissible inputs $v$ at prediction step $k+j|k$, as defined in Sec. IV-B. The intuition behind this construction is similar to that of $Z_{k+j|k}$: given the inner approximation $\underline{V}_{k|k}$, it is further shrunk at each prediction step $j$ by propagating the noise $\hat{w}_k$ forward through the dynamics, shaped according to the stabilizing feedback law $K$, following [4].
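For hyper-rectangular sets, the tightening in (8)-(9) can be carried out coordinate-wise; the linearly mapped boxes $L_i \widehat{W}$ are over-approximated by their interval image, which keeps the tightening conservative. A sketch with placeholder data (the boxes below stand in for $Z$, $\underline{V}_{k+j|k}$, $\widehat{W}_{k+(j-i)|k}$ and $\widetilde{E}_{k+j|k}$ of Sec. IV):

```python
import numpy as np

def pont_diff(A, B):
    """A (-) B for boxes represented as (lo, hi) pairs."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    assert np.all(lo <= hi), "empty set"
    return (lo, hi)

def box_image(L, W):
    """Outer (interval) box of {L w : w in box W}."""
    c1, c2 = L * W[0], L * W[1]                    # entry-wise L[i,j]*lo[j], L[i,j]*hi[j]
    return (np.minimum(c1, c2).sum(axis=1), np.maximum(c1, c2).sum(axis=1))

def neg(B):
    return (-B[1], -B[0])

A = np.array([[1.0, 0.1], [0.0, 1.0]]); Bm = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.5]]); N = 10
Z = (np.array([-1.5, -0.8]), np.array([1.5, 0.8]))
Vin = [(np.array([-2.0]), np.array([2.0]))] * (N + 1)        # \underline V_{k+j|k}
What = [(np.full(2, -0.05), np.full(2, 0.05))] * (N + 2)     # \hat W_{k+j|k}
Etil = [(np.full(2, -0.02), np.full(2, 0.02))] * (N + 1)     # \tilde E_{k+j|k}

L = [np.eye(2)]
for _ in range(N):
    L.append((A + Bm @ K) @ L[-1])                 # L_0 = I, L_{j+1} = (A+BK) L_j

Zk, Vk = [], []
for j in range(N + 1):                             # state constraints (8)
    Zj = Z
    for i in range(j):
        Zj = pont_diff(Zj, box_image(L[i], What[j - i]))
    Zk.append(pont_diff(Zj, neg(Etil[j])))
for j in range(N):                                 # input constraints (9)
    Vj = Vin[j]
    for i in range(j):
        Vj = pont_diff(Vj, box_image(K @ L[i], What[j - i]))
    Vk.append(Vj)
print(Zk[N], Vk[N - 1])                            # most-tightened boxes
```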
The terminal constraint $P_f$: This constrains the extended state $p_k = [\bar{z}_k, v_{k-1}]^T$, and is given by
$$P_f = C_p \ominus \begin{bmatrix} (A+BK)^N \\ K(A+BK)^{N-1} \end{bmatrix} \widehat{W}_{max} \quad (10)$$
where $\widehat{W}_{max} \subset \mathbb{R}^{n_z}$ is a bounding set on the worst-case disturbance (we show how it is computed in Sec. IV-C), and $C_p \subset \mathbb{R}^{n_z} \times \mathbb{R}^{n_v}$ is an invariant set of the nominal dynamics subject to the stabilizing controller $\bar{z} \mapsto K\bar{z}$, naturally extended to the extended state $p$: that is, there exists a feedback control law $p \mapsto \widehat{K}p$ such that for all $p \in C_p$
$$\widehat{A}p + \widehat{B}\widehat{K}p + \widehat{L}_N [\hat{w}^T, 0^T]^T \in C_p, \quad \forall \hat{w} \in \widehat{W}_{max} \quad (11)$$
with $\widehat{A} = \begin{bmatrix} A & 0_{n\times m} \\ 0_{m \times n} & 0_{m \times m} \end{bmatrix}$, $\widehat{B} = \begin{bmatrix} B \\ I_{m\times m} \end{bmatrix}$, $\widehat{K} = \begin{bmatrix} K & 0_{m\times m} \end{bmatrix}$, $\widehat{L}_N = (\widehat{A} + \widehat{B}\widehat{K})^N$. It is important to note the following:
• The set $P_f$ can be computed offline since it depends on $\widehat{W}_{max}$, $\widetilde{E}_{max}$ and the global inner approximation for the constraints on $v$, $V_{inner-global}$, all of which can be computed offline.
• If $P_f$ is non-empty, then all intermediate sets that appear in (7) are also non-empty, since $P_f$ shrinks the state and input sets by the maximum disturbances $\widehat{W}_{max}$ and $\widetilde{E}_{max}$. Thus we can tell, before running the system, whether the RMPC might be faced with empty constraint sets (and thus infeasible optimizations).
• Note that all constraints are linear.
B. The Control Algorithm
We can now describe the algorithm used for solving (7)
by robust receding horizon control.
C. Robust Feasibility and Stability
We are now ready to state the main result of this paper:
namely, that the RMPC of the feedback linearized system
(7) is feasible at all time steps if it starts out feasible, and
that it stabilizes the nonlinear system, for all possible values
of the state estimation error and feedback linearization error.
Theorem 1 (Robust Feasibility): If at some time step $k_0 \ge 0$ the RMPC optimization $P_{k_0}(\hat{z}_{k_0})$ is feasible, then all subsequent optimizations $P_k(\hat{z}_k)$, $k > k_0$, are also feasible. Moreover, the nonlinear system (1) controlled by Algorithm 1 and subject to the disturbances $(E, W)$ satisfies its state and input constraints at all times $k \ge k_0$.

Algorithm 1 RMPC via feedback linearization
Require: System model, $X$, $U$, $E$, $W$
Offline, compute:
  Initial safe sets $X_0$ and $Z$
  $\widetilde{E}_{max}$, $\widehat{W}_{max}$ (Sec. IV-C)
  $C_p$, $P_f$ (Sec. III-A)
Online:
  if $P_f = \emptyset$ then
    Quit
  else
    for $k = 1, 2, \ldots$ do
      Get estimate $\hat{x}_k$, compute $\hat{z}_k = T(\hat{x}_k)$
      Compute $\underline{V}_{k+j|k}$, $\widetilde{E}_{k+j|k}$, $\widehat{W}_{k+j|k}$ (Sec. IV-B, IV-C)
      Compute $Z_{k+j|k}$, $V_{k+j|k}$ (Sec. III-A)
      $(v^*_{k|k}, \ldots, v^*_{k+N|k})$ = solution of $P_k(\hat{z}_k)$
      $v_k = v^*_{k|k}$
      Apply $u_k = R(\hat{x}_k)^{-1}(-b(\hat{x}_k) + v_k)$ to the plant
    end for
  end if
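The online phase of Algorithm 1 maps directly onto the following Python skeleton (our own sketch; every callable passed in is assumed to implement the corresponding computation from Secs. III-IV and is not provided here):

```python
import numpy as np

def rmpc_fl_online(T, R, b, get_estimate, apply_to_plant,
                   compute_error_sets, compute_constraint_sets, solve_Pk,
                   Pf_nonempty, n_steps):
    """Online phase of Algorithm 1 (sketch). The offline quantities
    (X0, Z, E_max, W_max, Cp, Pf) are assumed folded into the callables."""
    if not Pf_nonempty:
        return                                    # quit: RMPC could face empty constraint sets
    for k in range(1, n_steps + 1):
        x_hat = get_estimate(k)                   # periodic state estimate \hat x_k
        z_hat = T(x_hat)                          # \hat z_k = T(\hat x_k)
        err_sets = compute_error_sets(x_hat)      # \underline V, \tilde E, \hat W  (Sec. IV)
        cons_sets = compute_constraint_sets(err_sets)   # Z_{k+j|k}, V_{k+j|k}  (Sec. III-A)
        v_star = solve_Pk(z_hat, cons_sets)       # solve the QP (7); returns v*_{k|k..k+N|k}
        v_k = v_star[0]
        # u_k = R(\hat x_k)^{-1} (-b(\hat x_k) + v_k), cf. Sec. II
        u_k = np.linalg.solve(R(x_hat), -b(x_hat) + v_k)
        apply_to_plant(u_k)
```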
Theorem 2 (Stability): Given an equilibrium point $x_e \in X_0 \subset T^{-1}(Z)$ of the nonlinear dynamics (1), Algorithm 1 stabilizes the nonlinear system to an invariant set around $x_e$.
The proofs are in the online report [9].
IV. SET DEFINITIONS FOR THE RMPC
Algorithm 1 and the problem $P_k(\hat{z}_k)$ (7) use a number of constraint sets to ensure recursive feasibility of the successive RMPC optimizations, namely: inner approximations of the admissible input sets $\underline{V}_{k+j|k}$, bounding sets for the ($T$-mapped) estimation error $\widetilde{E}_{k+j|k}$, bounding sets for the process noise $\widehat{W}_{k+j|k}$, and the largest error and noise sets $\widetilde{E}_{max}$ and $\widehat{W}_{max}$. In this section we show how these sets are defined and computed. Note that our approach extends [3] in that: 1) we compute the feasible set for the states of the feedback linearized system under a non-trivial diffeomorphism $T$, and 2) we compute the bounding sets for disturbances while considering estimation error and process noise, neither of which are considered in [3]. In addition, due to the presence of state-estimation error, we compute these sets using an over-approximation of the reach set, as seen in the following subsection.
Since we control the system in $z$-space, we need to compute a set $Z \subset \mathbb{R}^{n_z}$ s.t. $z \in Z \implies x = T^{-1}(z) \in X$. Moreover, to check feasibility at time 0 of the MPC optimization, we need a subset $X_0 \subset X$ s.t. $x \in X_0 \implies z = T(x) \in Z$. Mapping sets between $z$- and $x$-spaces via the arbitrary diffeomorphism $T$ has to be done numerically, and we show how in the online appendix.
A. Approximating the reach set of the nonlinear system
First we show how to compute an outer-approximation $\overline{X}_{k+j|k}$ of the $j$-step reach set of the nonlinear system, starting at time $k$. This is needed for computing $\underline{V}_{k+j|k}$ and $\widetilde{E}_{k+j|k}$.
Fig. 1. The outer-approximated reach sets for $x_{k+j}$, computed at time steps $k$ and $k+1$, used to compute $\widetilde{E}_{k+j|k}$ and $\underline{V}_{k+j|k}$.
In all but the simplest systems, forward reachable sets cannot be computed exactly. To approximate them we may use a reachability tool for nonlinear systems like RTreach [12]. A reachability tool computes an outer-approximation of the reachable set of a system starting from some set $\mathcal{X} \subset X$, subject to inputs from a set $U$, for a duration $T \ge 0$. Denote this approximation by $RT_T(\mathcal{X}, U)$, so $x(T) \in RT_T(\mathcal{X}, U)$ for all $T$, $x(0) \in \mathcal{X}$ and $u : [0, T] \to U$.
At time $k$, the state estimate $\hat{x}_k$ is known. Therefore $x_k = \hat{x}_k - e_k \in \hat{x}_k \oplus (-E) := \overline{X}_{k|k}$. Propagating $\overline{X}_{k|k}$ forward one step through the continuous-time nonlinear dynamics yields $X_{k+1|k}$, which is outer-approximated by $RT_T(\overline{X}_{k|k}, U)$. The state estimate that the system will receive at time $k+1$ is therefore bound to be in the set $RT_T(\overline{X}_{k|k}, U) \oplus E$. Since $0 \in E$, we maintain $X_{k+1|k} \subset RT_T(\overline{X}_{k|k}, U) \oplus E$. For $1 \le j \le N$, we define the $j$-step over-approximate reach set computed at time $k$ to be
$$\overline{X}_{k|k} := \hat{x}_k \oplus (-E), \qquad \overline{X}_{k+j|k} := RT_T(\overline{X}_{k+j-1|k}, U) \oplus E \oplus (-E) \quad (12)$$
(The reason for adding the extra $-E$ term will be apparent in the proof of Thm. 1.) Fig. 1 shows a visualization of this approach. The following holds by construction:
Lemma 3: For any time $k$ and step $j \ge 1$, $X_{k+j|k} \subset \overline{X}_{k+j|k}$.
This construction of $\overline{X}_{k+j|k}$ permits us to prove recursive feasibility of the RMPC controller, because it causes the constraints of the RMPC problem set up at time $k+1$ to be consistent with the constraints of the problem set up at time $k$.
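When a verified reachability tool such as RTreach [12] is not at hand, a crude interval propagation can stand in for $RT_T$ in simple examples. The sketch below (our own construction; the interval-Euler step is only illustrative and is not a formally verified enclosure like the tool assumed by the paper) instantiates the recursion (12) for the running example (6):

```python
import numpy as np

U = (-2.75, 2.75)                                    # input set of the running example
E_half = 0.0227                                      # ||e||_inf bound from Example 2
E = (np.full(2, -E_half), np.full(2, E_half))        # symmetric, so -E = E

def deriv_bounds(lo, hi):
    """Interval bounds on the vector field of (6) over the box [lo, hi]
    (uses monotonicity of sin on |x2| <= pi/2)."""
    d1 = (np.sin(lo[1]), np.sin(hi[1]))              # x1' = sin(x2)
    x1sq_hi = max(lo[0] ** 2, hi[0] ** 2)            # x2' = -x1^2 + u
    x1sq_lo = 0.0 if lo[0] <= 0.0 <= hi[0] else min(lo[0] ** 2, hi[0] ** 2)
    return (np.array([d1[0], -x1sq_hi + U[0]]),
            np.array([d1[1], -x1sq_lo + U[1]]))

def reach_step(lo, hi, tau, n_sub=100):
    """Illustrative interval-Euler over-approximation of RT_tau(box, U)."""
    dt = tau / n_sub
    for _ in range(n_sub):
        dlo, dhi = deriv_bounds(lo, hi)
        lo, hi = lo + dt * dlo, hi + dt * dhi
    return lo, hi

# Recursion (12): X_{k|k} = x_hat (+) (-E); X_{k+j|k} = RT_tau(X_{k+j-1|k}, U) (+) E (+) (-E)
x_hat, tau, N = np.array([0.3, 0.2]), 0.1, 3
lo, hi = x_hat + E[0], x_hat + E[1]
reach = [(lo, hi)]
for j in range(1, N + 1):
    lo, hi = reach_step(lo, hi, tau)
    lo, hi = lo + 2 * E[0], hi + 2 * E[1]            # (+) E (+) (-E), exploiting symmetry of E
    reach.append((lo, hi))
print(reach[-1])                                     # the N-step over-approximate reach box
```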
B. Approximating the bounding sets for the input
Given $x \in X$, define the set $V(x) := \{v \in \mathbb{R}^{n_v} \mid u(x) = R^{-1}(x)(-b(x) + v) \in U\}$. We assume that there exist functions $\underline{v}_i, \overline{v}_i : X \to \mathbb{R}$ s.t. for any $x$, $V(x) = \{[v_1, \ldots, v_{n_v}]^T \mid \underline{v}_i(x; U) \le v_i \le \overline{v}_i(x; U)\}$. Because in general $V(x)$ is not a rectangle, we work with inner and outer rectangular approximations of $V(x)$. Specifically, let $\mathcal{X}$ be a subset of $X$. Define the inner and outer bounding rectangles, respectively,
$$\underline{V}(\mathcal{X}) := \{[v_1, \ldots, v_{n_v}]^T \mid \max_{x \in \mathcal{X}} \underline{v}_i(x; U) \le v_i \le \min_{x \in \mathcal{X}} \overline{v}_i(x; U)\}$$
$$\overline{V}(\mathcal{X}) := \{[v_1, \ldots, v_{n_v}]^T \mid \min_{x \in \mathcal{X}} \underline{v}_i(x; U) \le v_i \le \max_{x \in \mathcal{X}} \overline{v}_i(x; U)\}$$
By construction, we have for any subset $\mathcal{X} \subset X$
$$\underline{V}(\mathcal{X}) \subseteq \cap_{x \in \mathcal{X}} V(x) \subset \overline{V}(\mathcal{X}) \quad (13)$$
Fig. 2. Local and global inner approximations of input constraints for the running example, with $\overline{X}_{k+j|k} = [-\pi/4, 0] \times [-0.9666, -0.6283]$ for some $k, j$ and $U = [-2.75, 2.75]$. Color in online version.
If two subsets of $X$ satisfy $\mathcal{X}_1 \subset \mathcal{X}_2$, then it holds that
$$\underline{V}(\mathcal{X}_2) \subset \underline{V}(\mathcal{X}_1), \quad \overline{V}(\mathcal{X}_1) \subset \overline{V}(\mathcal{X}_2) \quad (14)$$
We can compute:
$$\underline{V}_{k+j|k} = \underline{V}(\overline{X}_{k+j|k}), \quad V_{inner-global} = \underline{V}(X) \quad (15)$$
In practice we use interval arithmetic to compute these sets since $\overline{X}_{k+j|k}$ and $U$ are hyper-intervals. Fig. 2 shows these sets for the running example.
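For the running example, a concrete (if approximate) evaluation of (15) can be sketched as follows. It assumes the linearizing input $u = x_1^2 + v/\cos(x_2)$ derived earlier, so that $V(x) = [\cos(x_2)(\underline{u} - x_1^2),\, \cos(x_2)(\overline{u} - x_1^2)]$ wherever $\cos(x_2) > 0$, and it replaces the interval arithmetic used in the paper by grid sampling, so the resulting rectangles are only approximate:

```python
import numpy as np

U_lo, U_hi = -2.75, 2.75
# v-bounds of V(x) under u = x1^2 + v/cos(x2) (assumed; see Example 1).
v_lo = lambda x1, x2: np.cos(x2) * (U_lo - x1 ** 2)
v_hi = lambda x1, x2: np.cos(x2) * (U_hi - x1 ** 2)

def input_rects(box, n=300):
    """Inner and outer rectangles of (15) over an x-space box (grid approximation)."""
    x1, x2 = np.meshgrid(np.linspace(box[0][0], box[1][0], n),
                         np.linspace(box[0][1], box[1][1], n))
    lo, hi = v_lo(x1, x2), v_hi(x1, x2)
    inner = (lo.max(), hi.min())        # underline-V(box): [max_x v_lo, min_x v_hi]
    outer = (lo.min(), hi.max())        # overline-V(box)
    return inner, outer

box_local = (np.array([-np.pi / 4, -0.9666]), np.array([0.0, -0.6283]))   # box of Fig. 2
box_global = (np.array([-np.pi / 2, -np.pi / 3]), np.array([np.pi / 2, np.pi / 3]))  # X
print(input_rects(box_local))           # local inner/outer approximations
print(input_rects(box_global)[0])       # V_inner-global = underline-V(X)
```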
C. Approximating the bounding sets for the disturbances
We will also need to define containing sets for the state estimation error in $z$-space: recall that $\hat{z}_k = T(\hat{x}_k) = T(x_k + e_k)$. We use a Taylor expansion (and the remainder theorem):
$$\hat{z}_k = T(x_k) + \underbrace{\frac{dT}{dx}(x_k)}_{M(x_k)} e_k + \underbrace{\frac{1}{2} e_k^T \frac{d^2T}{dx^2}(c)\, e_k}_{r_k(c)} , \quad c \in x_k \oplus E$$
$$= T(x_k) + M(x_k)e_k + r_k(c) = T(x_k) + h_k + r_k(c), \quad c \in x_k \oplus E$$
The remainder term $r_k(c)$ is bounded in the set $\cup_{c \in \{x_k\} \oplus E}\, \frac{1}{2} e^T \frac{d^2T}{dx^2}(c)\, e$. Thus when setting up $P_k(\hat{z}_k)$, at the $j$th step, $r_{k+j|k} \in D_{k+j|k} := \cup_{c \in \overline{X}_{k+j|k} \oplus E}\, \frac{1}{2} e^T \frac{d^2T}{dx^2}(c)\, e$, where $\overline{X}_{k+j|k}$ is the reach set computed in (12).
The error $h_k$ lives in $\cup_{x \in X_k, e \in E} M(x)e$. Thus when setting up $P_k(\hat{z}_k)$, the error $h_{k+j|k}$ lives in $\cup_{x \in \overline{X}_{k+j|k}} M(x)E$. Finally, the rectangular over-approximation of this set is
$$H_{k+j|k} = \Big\{ h \;\Big|\; \sum_{\ell=1}^{n_x} \min_{x \in \overline{X}_{k+j|k},\, e \in E} M_{i\ell}(x)\, e(\ell) \;\le\; h(i) \;\le\; \sum_{\ell=1}^{n_x} \max_{x \in \overline{X}_{k+j|k},\, e \in E} M_{i\ell}(x)\, e(\ell) \Big\} \quad (16)$$
where $M_{i\ell}$ is the $(i,\ell)$th element of matrix $M$ and $h(i)$ is the $i$th element of $h$.
Therefore the state estimation error $h_{k+j|k} + r_{k+j|k}$ is bounded in the set $H_{k+j|k} \oplus D_{k+j|k}$. In the experiments we ignore the remainder term $D_{k+j|k}$ based on the observation that $e_k$ is small relative to the state $x_k$. Thus we use:
$$\widetilde{E}_{k+j|k} = H_{k+j|k} \quad (17)$$
Example 2: For the running example (6), we have $M = \begin{bmatrix} 1 & 0 \\ 0 & \cos(x_2) \end{bmatrix}$. If the estimation error $e$ (in radians) is bounded in $E = \{e \mid \|e\|_\infty \le 0.0227\}$, then the relative linearization error, averaged over several realizations of the error, is less than $2 \times 10^{-3}$.
We also need to calculate containing sets for the process noise $\hat{w}$. Recall that for all $k, j$, $\hat{z}_{k+j+1} = A\hat{z}_{k+j} + Bv_k + \hat{w}_{k+j+1}$. Therefore
$$\hat{w}_{k+j+1} \in \widehat{W}_{k+j+1|k} := W \oplus \widetilde{E}_{k+j+1|k} \oplus (-A\widetilde{E}_{k+j|k}) \quad (18)$$
We also define the set $\widetilde{E}_{max}$, which is necessary for the terminal constraints of Eq. (10). $\widetilde{E}_{max}$ represents the worst-case bound on the estimation error $\tilde{e}_k$, and is computed similarly to Eq. (16), but over the entire set $X$. $\widehat{W}_{max}$ is then defined as:
$$\widehat{W}_{max} = W \oplus \widetilde{E}_{max} \oplus (-A\widetilde{E}_{max}) \quad (19)$$
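For the running example, $M(x) = dT/dx = \mathrm{diag}(1, \cos(x_2))$, so (16)-(18) can be evaluated coordinate-wise. A sketch with placeholder data (the discretized $A$ matrix, the process-noise box, and the reach-set box are our own choices; the remainder term $D_{k+j|k}$ is ignored, as in the experiments):

```python
import numpy as np

E = (np.full(2, -0.0227), np.full(2, 0.0227))        # x-space estimation-error box (Example 2)
W = (np.full(2, -1e-2), np.full(2, 1e-2))            # process-noise box (placeholder)
A = np.array([[1.0, 0.1], [0.0, 1.0]])               # discretized A of S_fl (placeholder)

def E_tilde(box):
    """Rectangular bound (16)-(17) on h = M(x) e over a reach-set box, M = diag(1, cos(x2))."""
    c_hi = max(np.cos(box[0][1]), np.cos(box[1][1]))  # cos is monotone on the boxes used here
    return (np.array([E[0][0], c_hi * E[0][1]]),
            np.array([E[1][0], c_hi * E[1][1]]))

def W_hat(Et_next, Et):
    """Disturbance box (18): W (+) E~_{k+j+1|k} (+) (-A E~_{k+j|k})."""
    c1, c2 = -A * Et[0], -A * Et[1]                   # interval image of -A E~
    m_lo, m_hi = np.minimum(c1, c2).sum(axis=1), np.maximum(c1, c2).sum(axis=1)
    return (W[0] + Et_next[0] + m_lo, W[1] + Et_next[1] + m_hi)

box = (np.array([-np.pi / 4, -0.9666]), np.array([0.0, -0.6283]))   # an X_{k+j|k} box
Et = E_tilde(box)
print(Et)                    # tilde-E_{k+j|k}
print(W_hat(Et, Et))         # hat-W_{k+j+1|k}
```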
V. EXPERIMENTS
We evaluate our approach on a 4D flexible joint manipulator and a simple 2-state example (in the online technical report [9]). We implemented the RMPC controller of Alg. 1 in MATLAB. The set computations were done using the MPT Toolbox [13], and the invariant set computations using the Matlab Invariant Set Toolbox [14]. The reachability computations for $\overline{X}_{k+j|k}$ were performed on the linear dynamics and mapped back to $x$-space. The RMPC optimizations were formulated in CVX [15] and solved by Gurobi [16].
A. Single link flexible joint manipulator
We consider the single link flexible manipulator system $S$, also used in [8] and [17], whose dynamics are given by:
$$S: \quad \begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} = \begin{bmatrix} x_2 \\ -\frac{mgl}{I}\sin(x_1) - \frac{\sigma}{I}(x_1 - x_3) \\ x_4 \\ \frac{\sigma}{J}(x_1 - x_3) \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{J} \end{bmatrix} u$$
This models a system where a motor, with an angular moment of inertia $J = 1$, is coupled to a uniform thin bar of mass $m = 1/g$, length $l = 1$ m and moment of inertia $I = 1$, through a flexible torsional spring with stiffness $\sigma = 1$, and $g = 9.8$ m/s$^2$. States $x_1$ and $x_3$ are the angles of the bar and the motor shaft in radians, respectively, and $x_2$, $x_4$ are their respective rotational speeds in radians/sec. The safe set is the box $X = [-\pi/4, \pi/4] \times [-\pi/4, \pi/4] \times [-\pi, \pi] \times [-\pi, \pi]$. The input torque $u$ is bounded in $U = [\underline{u}, \overline{u}] = [-10, 10]$ N$\cdot$m. The estimation error $e = \hat{x} - x$ is bounded in $E = [-\pi/180, \pi/180]^4 \subset \mathbb{R}^4$ and $W = [-10^{-3}, 10^{-3}]^4 \subset \mathbb{R}^4$.
The diffeomorphism $T$ is given by:
$$z = T(x) = \begin{bmatrix} x_1 \\ x_2 \\ -\frac{mgl}{I}\sin(x_1) - \frac{\sigma}{I}(x_1 - x_3) \\ -\frac{mgl}{I}x_2\cos(x_1) - \frac{\sigma}{I}(x_2 - x_4) \end{bmatrix}$$
The input to the feedback linearized system is given by $v = \beta u + \alpha(x)$, where $\beta = \frac{\sigma}{IJ}$ and
$$\alpha(x) = \frac{mgl}{I}x_2^2\sin(x_1) + \frac{\sigma^2}{IJ}(x_1 - x_3) - \left(\frac{mgl}{I}\cos(x_1) - \frac{\sigma}{I}\right)\left(\frac{mgl}{I}\sin(x_1) + \frac{\sigma}{I}(x_1 - x_3)\right)$$
The feedback linearized system $S_{fl}$ has the dynamics: $\dot{z}_1 = z_2$, $\dot{z}_2 = z_3$, $\dot{z}_3 = z_4$, $\dot{z}_4 = v$.
Fig. 3. The states and their estimates of the feedback linearized and non-linear manipulator. Note $z_1 = x_1$ and $z_2 = x_2$. Color in online version.
A global inner approximation of the $v$ input set is computed, via interval arithmetic, as $V_{inner-global} = [\max_{x\in X}\alpha(x) + \beta\underline{u},\; \min_{x\in X}\alpha(x) + \beta\overline{u}]$. Similarly, the inner approximations $\underline{V}_{k+j|k}$ are computed online by interval arithmetic as $\underline{V}_{k+j|k} = [\max_{x\in \overline{X}_{k+j|k}}\alpha(x) + \beta\underline{u},\; \min_{x\in \overline{X}_{k+j|k}}\alpha(x) + \beta\overline{u}]$. Using the procedure in the appendix [9], the set of safe states for $S_{fl}$ is given by $Z = [-0.5121, 0.5121]^2 \times [-2.5347, 2.5347] \times [-2.5603, 2.5603]$. Also $X_0 = [-0.4655, 0.4655]^2 \times [-2.7598, 2.7598] \times [-2.793, 2.793]$. Comparing this to the set $X$ shows that we can stabilize the system starting from initial states in a significantly large region of $X$.
We applied our controller to the above system with a discretization rate of 10 Hz and MPC horizon $N = 10$. Fig. 3 shows the states of the feedback linearized system $S_{fl}$. They converge to the origin in the presence of estimation error, while respecting all constraints. Fig. 3 also shows $x_3$ and $x_4$: they also converge to zero. Fig. 4 shows the input $v$ to $S_{fl}$ along with the global inner approximation $V_{inner-global}$ and the $x$-dependent inner approximations at the instant when the control is applied, $\underline{V}_{k|k}$, computed online. Note that the bounds computed online allow for significantly more control action compared to the conservative global inner approximation. Finally, Fig. 4 also shows the input $u$ applied to the non-linear system (and its bounds), which robustly respects its constraints $u \in U$.
VI. DISCUSSION
In this paper we develop the first algorithm for robust control of a non-linear system with estimation errors and state and input constraints via feedback linearization and Robust MPC. Experimental results show that the control algorithm stabilizes the systems while ensuring robust constraint satisfaction. While we only evaluated our approach on single-input systems, the formulation and set computations involved hold as-is for multi-input systems as well.
Limitations of the approach mostly have to do with the numerical limitations involved in computing the constraint sets, and the potential conservatism of the approximations.
Fig. 4. Inputs $v$ and $u$ and their bounds for the manipulator example. Color in online version.
APPENDIX
A. Constraints of successive MPC problems
We are now ready to state and prove a key lemma
regarding the evolution of the state, error and input sets
between MPC optimization problems. This lemma will be
key to proving recursive feasibility of the MPC controller,
since it allows us to show that the constraint sets of one
problem, at time k, are appropriate supersets of the constraint
sets of the next problem, at time k+ 1.
Lemma 4: Let $\overline{X}_{k+j|k}$ be the $j$-step outer-approximate reach set computed at time $k$ by a reachability tool as described in Sec. IV-A. Let $\widehat{W}_{k+j|k}$ be the set defined in (18). Let $\widetilde{E}_{k+j|k}$ be the error set computed using (16), (17) by substituting $E \leftarrow \widetilde{E}_{k|k}$. Let $\underline{V}_{k+j|k} = \underline{V}(\overline{X}_{k+j|k})$ and $\overline{V}_{k+j|k} = \overline{V}(\overline{X}_{k+j|k})$. Then the following hold for all $k \ge 0$, $j \ge 1$:
1) $\overline{X}_{k+1+j|k+1} \subseteq \overline{X}_{k+j+1|k}$
2) $\widetilde{E}_{k+1+j|k+1} \subseteq \widetilde{E}_{k+j+1|k}$
3) $\widehat{W}_{k+1+j|k+1} \subseteq \widehat{W}_{k+j+1|k}$
4) $\overline{V}_{k+1+j|k+1} \subseteq \overline{V}_{k+j+1|k}$
5) $\underline{V}_{k+1+j|k+1} \supseteq \underline{V}_{k+j+1|k}$ (note the change in inclusion direction)
Proof:
1) Fix an arbitrary $k$. We prove this by induction on $j \ge 1$.
Base case: $j = 1$. By construction, $\hat{x}_{k+1} \in RT_T(\overline{X}_{k|k}, U) \oplus E$. Therefore at time $k+1$, when setting up the problem $P_{k+1}(\hat{z}_{k+1})$, the algorithm will first compute $\overline{X}_{k+1|k+1} = \hat{x}_{k+1} \oplus (-E) \subset RT_T(\overline{X}_{k|k}, U) \oplus E \oplus (-E) = \overline{X}_{k+1|k}$. Also $\overline{X}_{k+2|k+1} = RT_T(\overline{X}_{k+1|k+1}, U) \oplus E \oplus (-E) \subset RT_T(\overline{X}_{k+1|k}, U) \oplus E \oplus (-E) = \overline{X}_{k+2|k}$.
Induction step: $j > 1$. By definition, $\overline{X}_{k+1+j|k+1} = RT_T(\overline{X}_{k+1+j-1|k+1}, U) \oplus E \oplus (-E) \subset RT_T(\overline{X}_{k+j|k}, U) \oplus E \oplus (-E)$ (by the induction hypothesis). This last set equals $\overline{X}_{k+j+1|k}$ by definition.
2) By 1) we have that $\min_{x\in \overline{X}_{k+j+1|k}, e\in E} M_{i\ell}(x)e(\ell) \le \min_{x\in \overline{X}_{k+1+j|k+1}, e\in E} M_{i\ell}(x)e(\ell)$ and that $\max_{x\in \overline{X}_{k+1+j|k+1}, e\in E} M_{i\ell}(x)e(\ell) \le \max_{x\in \overline{X}_{k+j+1|k}, e\in E} M_{i\ell}(x)e(\ell)$, which yields the desired result.
3) This is immediate from the definition (18) and 2).
4) and 5) These are immediate from (14).
B. Proof of Theorem 1
We prove the theorem by recursion, by showing that if at time step $k$ the problem $P_k(\hat{z}_k)$ is feasible and the feasible control input $v_k = v^*_{k|k}$ is applied, then $v_k$ is admissible (meets the system constraints), and at time $k+1$, $z_{k+1}$ is inside $Z$ and $P_{k+1}(\hat{z}_{k+1})$ is feasible for all disturbances. By recursion then, if we have feasibility at step $k = k_0$, we have robust constraint satisfaction and feasibility at time step $k_0+1$ and so on for all $k > k_0$.
To begin, let $P_k(\hat{z}_k)$ be feasible; then it has a feasible solution $(\{z^*_{k+j|k}\}_{j=0}^{N+1}, \{v^*_{k+j|k}\}_{j=0}^{N})$ that satisfies all the constraints of the Robust MPC. Now let us construct a feasible candidate solution for $P_{k+1}(\hat{z}_{k+1})$ at the next time step by shifting the above solution one step forward. Consider the candidate solution:
$$\bar{z}_{k+j+1|k+1} = z^*_{k+j+1|k} + L_j\hat{w}_{k+1}, \quad \forall j \in [0:N] \quad (20a)$$
$$\bar{z}_{k+N+2|k+1} = A\bar{z}_{k+N+1|k+1} + B\bar{v}_{k+N+1|k+1} \quad (20b)$$
$$\bar{v}_{k+j+1|k+1} = v^*_{k+j+1|k} + KL_j\hat{w}_{k+1}, \quad \forall j \in [0:N-1] \quad (20c)$$
$$\bar{v}_{k+N+1|k+1} = K\bar{z}_{k+N+1|k+1} \quad (20d)$$
First we show that the input and state constraints are satisfied by $v_k$ and $z_{k+1}$, then prove feasibility of the above candidate solution for $P_{k+1}(\hat{z}_{k+1})$.
Validity of input and next state: The next state is:
$$z_{k+1} = Az_k + Bv_k + w_k = A(\hat{z}_k - \tilde{e}_k) + Bv^*_{k|k} + w_k = A\hat{z}_k + Bv^*_{k|k} - \tilde{e}_{k+1} + (w_k + \tilde{e}_{k+1} - A\tilde{e}_k)$$
$$= Az^*_{k|k} + Bv^*_{k|k} - \tilde{e}_{k+1} + \hat{w}_{k+1} \quad \text{(by the initialization of } P_k(\hat{z}_k)) $$
$$= z^*_{k+1|k} - \tilde{e}_{k+1} + \hat{w}_{k+1} \quad (21)$$
By feasibility of the solution at time $k$,
$$z^*_{k+1|k} \in Z_{k+1|k} = Z \ominus (-\widetilde{E}_{k+1|k}) \ominus L_0\widehat{W}_{k+1|k}$$
Therefore $z_{k+1} \in Z$ and so $x_{k+1} \in X$. Moreover, by the feasibility of $v^*_{k|k}$ for $P_k(\hat{z}_k)$ and by the definition of $V_{k|k}$, $v_k = v^*_{k|k} \in V_{k|k}$, which implies that $u_k \in U$.
Hence, if $P_k(\hat{z}_k)$ is feasible, then the applied input at time step $k$ and the resulting next state $z_{k+1}$ (and hence $x_{k+1}$) are admissible under all possible disturbances. The next part of the proof focuses on showing that the candidate solution of Eq. (20) is indeed feasible for $P_{k+1}(\hat{z}_{k+1})$ by proving that it meets all the constraints.
Initial Condition: Recall from (5) that $\hat{z}_{k+1} = A\hat{z}_k + Bv_k + \hat{w}_{k+1}$. Also, by the construction of the candidate solution,
$$\bar{z}_{k+1|k+1} = z^*_{k+1|k} + L_0\hat{w}_{k+1} = Az^*_{k|k} + Bv^*_{k|k} + \hat{w}_{k+1} \quad (22)$$
Since $z^*_{k|k} = \hat{z}_k$ and $v^*_{k|k} = v_k$, by the two equations above we have
$$\bar{z}_{k+1|k+1} = \hat{z}_{k+1} \quad (23)$$
Hence, the candidate solution does indeed satisfy the initial condition for $P_{k+1}(\hat{z}_{k+1})$. Next we show that the candidate solution satisfies the nominal dynamics:
Nominal Dynamics: For $0 \le j < N$ we have:
$$\bar{z}_{k+j+2|k+1} = z^*_{k+j+2|k} + L_{j+1}\hat{w}_{k+1} = Az^*_{k+j+1|k} + Bv^*_{k+j+1|k} + L_{j+1}\hat{w}_{k+1}$$
By the construction of the candidate solution,
$$= A(\bar{z}_{k+j+1|k+1} - L_j\hat{w}_{k+1}) + B(\bar{v}_{k+j+1|k+1} - KL_j\hat{w}_{k+1}) + L_{j+1}\hat{w}_{k+1}$$
$$= A\bar{z}_{k+j+1|k+1} + B\bar{v}_{k+j+1|k+1} - (A+BK)L_j\hat{w}_{k+1} + L_{j+1}\hat{w}_{k+1} = A\bar{z}_{k+j+1|k+1} + B\bar{v}_{k+j+1|k+1}$$
For $j = N$, by construction $\bar{z}_{k+N+2|k+1} = A\bar{z}_{k+N+1|k+1} + B\bar{v}_{k+N+1|k+1}$. Hence the candidate solution does indeed satisfy the nominal dynamics.
State Constraints: To show feasibility of the candidate solution w.r.t. the state constraints, we need to show that $\bar{z}_{(k+1)+j|k+1} \in Z_{k+1+j|k+1}$ for all $j = 0, \ldots, N$. Rewriting Eq. (8) for $P_k(\hat{z}_k)$ for $j = 0, \ldots, N-1$, we have:
$$Z_{k+j+1|k} = Z \ominus \bigoplus_{i=0}^{j} L_i\widehat{W}_{k+(j+1-i)|k} \ominus (-\widetilde{E}_{k+j+1|k}) = Z \ominus L_j\widehat{W}_{k+1|k} \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+(j+1-i)|k} \ominus (-\widetilde{E}_{k+j+1|k}) \quad (24)$$
Also, let us write the state constraints for all $j = 0, \ldots, N$ for the problem at time $k+1$, i.e. for $P_{k+1}(\hat{z}_{k+1})$:
$$Z_{(k+1)+j|k+1} = Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{(k+1)+(j-i)|k+1} \ominus (-\widetilde{E}_{k+1+j|k+1})$$
Remember, by construction of the candidate, we have $\bar{z}_{k+j+1|k+1} = z^*_{k+j+1|k} + L_j\hat{w}_{k+1}$. Also, by feasibility of the algorithm at time $k$, we have $z^*_{k+j+1|k} \in Z_{k+j+1|k}$, and by definition, $L_j\hat{w}_{k+1} \in L_j\widehat{W}_{k+1|k}$. Therefore, by Eq. (24), we have for all $j = 0, \ldots, N-1$,
$$\bar{z}_{(k+1)+j|k+1} \in Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+(j+1-i)|k} \ominus (-\widetilde{E}_{k+j+1|k}) \quad (25)$$
Using points 2) and 3) from Lemma 4,
$$Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+(j+1-i)|k} \ominus (-\widetilde{E}_{k+j+1|k}) \subseteq Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{(k+1)+(j-i)|k+1} \ominus (-\widetilde{E}_{k+1+j|k+1})$$
And using the expression for $Z_{(k+1)+j|k+1}$ above, this implies, for all $j = 0, \ldots, N-1$,
$$\bar{z}_{(k+1)+j|k+1} \in Z_{k+1+j|k+1}$$
Now for $j = N$, $\bar{z}_{k+N+1|k+1} = z^*_{k+N+1|k} + L_N\hat{w}_{k+1}$. From the terminal constraint we have $[z^*_{k+N+1|k}\; v^*_{k+N|k}]^T \in P_f = C_p \ominus \widehat{L}_N\widehat{F}\,\widehat{W}_{max}$ (with $\widehat{F} = [I\;\; 0]^T$, so that $\widehat{L}_N\widehat{F}\,\widehat{W}_{max}$ is the set subtracted in (10)). Since $\hat{w}_{k+1} \in \widehat{W}_{max}$, and by the construction of the candidate solution,
$$[\bar{z}_{k+N+1|k+1}\; \bar{v}_{k+N|k+1}]^T \in C_p \quad (26)$$
Remember, by definition of the invariant set, $C_p \subseteq P_N(\widetilde{E}_{max}, \widetilde{E}_{max})$, and since by definition of $\widetilde{E}_{max}$ and Eq. (8) we have $P_N(\widetilde{E}_{max}, \widetilde{E}_{max}) \subseteq Z_{k+1+N|k+1} \times V_{k+1+N|k+1}$, it follows that $C_p \subseteq Z_{k+1+N|k+1} \times V_{k+1+N|k+1}$. This implies that $\bar{z}_{k+N+1|k+1} \in Z_{k+1+N|k+1}$ and additionally $\bar{v}_{k+N|k+1} \in V_{k+1+N|k+1}$. Therefore the state constraints are met by the candidate solution for all $j = 0, \ldots, N$.
Input Constraints: For the inputs, we show that the candidate solution $\bar{v}_{k+j+1|k+1}$, $j = 0, \ldots, N-2$, satisfies the input constraints for $P_{k+1}(\hat{z}_{k+1})$ by an argument similar to that used for the state constraints. Let us rewrite the input constraints for $P_k(\hat{z}_k)$ for $j = 0, \ldots, N-2$:
$$V_{k+j+1|k} = \underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j} KL_i\widehat{W}_{k+(j+1-i)|k} = \underline{V}_{k+j+1|k} \ominus KL_j\widehat{W}_{k+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+(j+1-i)|k} \quad (27)$$
Let us also rewrite the input constraints for $P_{k+1}(\hat{z}_{k+1})$ for $j = 0, \ldots, N-1$:
$$V_{k+1+j|k+1} = \underline{V}_{k+j+1|k+1} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{(k+1)+(j-i)|k+1} \quad (28)$$
By construction of the candidate, we have $\bar{v}_{k+1+j|k+1} = v^*_{k+j+1|k} + KL_j\hat{w}_{k+1}$. Also, by feasibility of the algorithm at time $k$, we have $v^*_{k+j+1|k} \in V_{k+j+1|k}$, and by definition, $KL_j\hat{w}_{k+1} \in KL_j\widehat{W}_{k+1|k}$. Therefore, by the definition of the Pontryagin difference and Eq. (27), we have for all $j = 0, \ldots, N-2$,
$$\bar{v}_{(k+1)+j|k+1} \in \underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+(j+1-i)|k} \quad (29a)$$
Using points 3) and 5) from Lemma 4,
$$\underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+(j+1-i)|k} \subseteq \underline{V}_{k+j+1|k+1} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{(k+1)+(j-i)|k+1} \quad (29b)$$
And using Eq. (28), this implies
$$\bar{v}_{(k+1)+j|k+1} \in V_{k+1+j|k+1} \quad (29c)$$
Note that for $j = N-1$ we have already shown, in the proof for the state constraints, that by definition of the invariant set $C_p$, $\bar{v}_{k+N|k+1} \in V_{k+1+N-1|k+1}$ by respecting an even tighter constraint. For the last input, $j = N$, we have $\bar{v}_{k+1+N|k+1} = K\bar{z}_{k+N+1|k+1}$; we show below that, together with $\bar{z}_{k+N+2|k+1}$, it is inside the (joint) terminal constraint $P_f$, and hence is feasible.
Terminal Constraints: Finally, we need to show that $[\bar{z}_{k+N+2|k+1}\; \bar{v}_{k+N+1|k+1}]^T \in P_f$. This can be shown using the construction of the terminal set and the candidate solution. From Eq. (20), we have:
$$\bar{z}_{k+N+2|k+1} = A\bar{z}_{k+N+1|k+1} + B\bar{v}_{k+N+1|k+1} \quad (30a)$$
$$\bar{v}_{k+N+1|k+1} = K\bar{z}_{k+N+1|k+1} \quad (30b)$$
Concatenate these two into $p_{k+N+2|k+1} = [\bar{z}_{k+N+2|k+1}\; \bar{v}_{k+N+1|k+1}]^T$. Also, $p_{k+N+1|k+1} = [\bar{z}_{k+N+1|k+1}\; \bar{v}_{k+N|k+1}]^T$ was shown to be in $C_p$ previously (Eq. (26)). Therefore, by definition of the invariant set $C_p$ (Eq. (11)), we have that $p_{k+N+2|k+1} + \widehat{L}_N\widehat{F}\hat{w}_{k+1|k} \in C_p$ for all $\hat{w}_{k+1|k} \in \widehat{W}_{k+1|k} \subseteq \widehat{W}_{max}$. Therefore $p_{k+N+2|k+1} \in C_p \ominus \widehat{L}_N\widehat{F}\,\widehat{W}_{max} = P_f$, and the terminal constraint is also met.
With this we have the proof of Theorem 1: we have shown that a feasible solution at time step $k$ for $P_k(\hat{z}_k)$ implies that the applied input $v_k$ is feasible, the next state $z_{k+1} \in Z$, and the problem $P_{k+1}(\hat{z}_{k+1})$ is feasible at time $k+1$; hence $P_{k+2}(\hat{z}_{k+2})$ is feasible at time step $k+2$, and so on.
C. Proof of Thm. 2
Let $T$ be the diffeomorphism mapping $x$ to $z$ from feedback linearization, and set $z_e = T(x_e)$. Since $x_e$ is an equilibrium point, $z_e = 0$. Recall that $Q$ and $Q_f$ of (7) are positive semi-definite and that $R$ is positive definite, so that the optimal cost $J^*(\bar{z}_k)$ is a positive definite function of $\bar{z}_k$, and that the terminal weight in (7) is equivalent to the infinite-horizon cost (by our choice of $Q_f$). Finally, Thm. 1 guarantees that the tail of the input sequence computed at $k$ is admissible at time $k+1$. Therefore it is a standard result that the optimal cost $J^*(\bar{z}_k)$ is non-increasing in $k$ and that $0$ is a stable equilibrium for the closed-loop linear system (e.g., see [11]). Moreover, the terminal set $P_f$ is a robust invariant set of the $z$ dynamics containing 0 (see Section III-A). Therefore Algorithm 1 stabilizes the nominal state $\bar{z}$ to $P_f$ from anywhere in $Z_0$, the true (linearized) state $z$ to an invariant set $Z_{inv}$ around 0, and the nonlinear state $x$ to the invariant set $X_{inv} = T^{-1}(Z_{inv})$. Therefore Algorithm 1 drives $x$ to $X_{inv}$ from anywhere in $X_0 \subset T^{-1}(Z)$.
D. Transforming between x-space and z-space
Since we control the system in $z$-space, we need to compute a set $Z \subset \mathbb{R}^{n_z}$ s.t. $z \in Z \implies x = T^{-1}(z) \in X$, i.e. $Z \subset T(X)$. Thus keeping the state $z$ of the linearized dynamics in $Z$ implies the nonlinear system's state $x$ remains in $X$. Moreover, to check feasibility at time 0 of the MPC optimization, and for stability of the nonlinear dynamics, we need a subset $X_0 \subset X$ s.t. $x \in X_0 \implies z = T(x) \in Z$, i.e. $X_0 \subset T^{-1}(Z)$. Because $T$ can be an arbitrary diffeomorphism, $Z$ and $X_0$ have to be computed numerically.
1) Let $Z_1 \subset \mathbb{R}^{n_z}$ be the rectangle with bounds in the $i$th dimension $[\min_{x\in X} T_i(x), \max_{x\in X} T_i(x)]$, $i = 1, \ldots, n_x$. This over-approximates $T(X)$. Next we need to prune it so it under-approximates $T(X)$.
2) Define $z_{in} := \arg\min\{\|z\| \mid z \in Z_1, T^{-1}(z) \notin X\}$. $z_{in}$ is the smallest-norm inadmissible $z$ in $Z_1$. Thus all points in the 0-ball of radius $\|z_{in}\|$, $B_z(0, \|z_{in}\|)$, are admissible, i.e. their pre-images via $T^{-1}$ are in $X$.
Fig. 5. The error sets $\widetilde{E}_{max}$ and $\widetilde{E}_{k+j|k}$ computed over an arbitrary $\overline{X}_{k+j|k}$. Also shown are realizations of $\tilde{e} := T(\hat{x}) - T(x)$ for randomly chosen $x \in \overline{X}_{k+j|k}$. Color in online version.
3) Let $R_z$ be the largest inscribed rectangle in $B_z(0, \|z_{in}\|)$. Now we need to get the $x$-set that maps to $R_z$ (or a subset of it).
4) Let $X_1 \subset X$ be the rectangle with bounds in the $i$th dimension $[\min_{z\in R_z} T_i^{-1}(z), \max_{z\in R_z} T_i^{-1}(z)]$. Again, this is an over-approximation of $T^{-1}(R_z)$, so it needs to be pruned.
5) Define $x_{in} := \arg\min\{\|x\| \mid x \in X_1, T(x) \notin R_z\}$. Then every point in the 0-ball $B_x(0, \|x_{in}\|) \subset X$ maps via $T$ to $R_z$.
Therefore we choose $Z = R_z$ and $X_0$ to be the largest inscribed rectangle in $B_x(0, \|x_{in}\|)$.
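A direct numerical realization of steps 1)-5) for the running example is sketched below (our own; random sampling stands in for the optimizations that find $z_{in}$ and $x_{in}$, so the result is only approximate, and steps 4)-5) for $X_0$ follow the same pattern with $T$ and $T^{-1}$ swapped):

```python
import numpy as np

# Running example: T(x) = (x1, sin x2), T^{-1}(z) = (z1, arcsin z2).
T    = lambda x: np.column_stack([x[:, 0], np.sin(x[:, 1])])
Tinv = lambda z: np.column_stack([z[:, 0], np.arcsin(np.clip(z[:, 1], -1.0, 1.0))])
X_lo, X_hi = np.array([-np.pi / 2, -np.pi / 3]), np.array([np.pi / 2, np.pi / 3])

rng = np.random.default_rng(0)
# Step 1: bounding rectangle Z1 of T(X)
zs = T(rng.uniform(X_lo, X_hi, size=(200000, 2)))
Z1_lo, Z1_hi = zs.min(axis=0), zs.max(axis=0)
# Step 2: smallest-norm point of Z1 whose pre-image leaves X
cand = rng.uniform(Z1_lo, Z1_hi, size=(200000, 2))
bad = np.any((Tinv(cand) < X_lo) | (Tinv(cand) > X_hi), axis=1)
r = np.linalg.norm(cand[bad], axis=1).min() if bad.any() else np.inf
# Step 3: largest rectangle inscribed in B_z(0, r) is a cube of half-side r/sqrt(n_z);
#         here we additionally keep it inside Z1.
Z_lo = np.maximum(Z1_lo, -r / np.sqrt(2) * np.ones(2))
Z_hi = np.minimum(Z1_hi,  r / np.sqrt(2) * np.ones(2))
print("Z ~", Z_lo, Z_hi)    # for this T, T(X) is itself a rectangle, so Z ~ Z1
```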
E. Error sets
For the running example, Fig. 5 shows the sets $\widetilde{E}_{max}$ and $\widetilde{E}_{k+j|k}$ computed by Eqs. (16) and (17) for an arbitrary $\overline{X}_{k+j|k} = [-\pi/4, 0] \times [-0.9666, -0.6283]$. It also shows 1000 randomly generated values of $T(\hat{x}) - T(x)$ (for randomly generated $e \in E$ and $x \in \overline{X}_{k+j|k}$), all of which fall inside $\widetilde{E}_{k+j|k}$.
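The Monte-Carlo check of Fig. 5 can be reproduced with a few lines (our own sketch; the small slack accounts for the second-order remainder $D_{k+j|k}$ that (17) ignores):

```python
import numpy as np

rng = np.random.default_rng(1)
box_lo, box_hi = np.array([-np.pi / 4, -0.9666]), np.array([0.0, -0.6283])
e_max = 0.0227                                        # ||e||_inf bound from Example 2
# First-order bound (16)-(17) with M(x) = diag(1, cos(x2)); cos is monotone on this box.
c_hi = np.cos(box_hi[1])
Et_lo, Et_hi = np.array([-e_max, -c_hi * e_max]), np.array([e_max, c_hi * e_max])

T = lambda x: np.stack([x[..., 0], np.sin(x[..., 1])], axis=-1)
x = rng.uniform(box_lo, box_hi, size=(1000, 2))
e = rng.uniform(-e_max, e_max, size=(1000, 2))
et = T(x + e) - T(x)                                  # realizations of the z-space error
slack = 0.5 * e_max ** 2                              # bound on the ignored remainder term
print(np.all((et >= Et_lo - slack) & (et <= Et_hi + slack)))   # expected: True
```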
F. Experiments with the running example
For the running example of Eq. (6), we discretize the feedback linearized system at 10 Hz and formulate the controller with a horizon of $N = 15$ steps. The cost function has parameters $Q = I$ and $R = 10^{-2}$, and $W = [-10^{-2}, 10^{-2}]^2$. The state trajectories (and estimates) for the nonlinear and linearized systems are shown in Fig. 6. Note that the states converge to the equilibrium 0. The input $u$ is shown in Fig. 7, and it can be noted that $u_k \in U$ for all $k$.
REFERENCES
[1] M. Cannon, “Efficient nonlinear model predictive control algorithms,”
Annual Reviews in Control, vol. 28, no. 2, pp. 229 – 237, 2004.
[2] H. Khalil, Nonlinear Systems. Prentice Hall, 2002.
[3] D. Simon, J. Lofberg, and T. Glad, “Nonlinear model predictive
control using feedback linearization and local inner convex constraint
approximations,” in Control Conference (ECC), 2013 European, July
2013, pp. 2056–2061.
Fig. 6. The states and their estimates of the feedback linearized and non-linear running example. Recall that $z_1 = x_1$; therefore, to reduce clutter, we plot the first state only for the feedback linearized system. Color in online version.
Fig. 7. Inputs $v$ and $u$ and their bounds for the running example. Color in online version.
[4] A. Richards and J. How, “Robust model predictive control with imper-
fect information,” in American Control Conference, 2005. Proceedings
of the 2005, June 2005, pp. 268–273.
[5] D. Mayne and E. Kerrigan, "Tube-based robust nonlinear model predictive control," in IFAC Symposium on Nonlinear Control Systems, 2007.
[6] S. Streif, M. Kogel, T. Bathge, and R. Findeisen, “Robust Nonlinear
Model Predictive Control with Constraint Satisfaction: A Relaxation-
based Approach,” in IFAC World Congress, 2014.
[7] W. Zhao and T. H. Go, “Quadcopter formation flight control combining
mpc and robust feedback linearization,” Journal of the Franklin
Institute, vol. 351, no. 3, pp. 1335 – 1355, 2014.
[8] W. Son, J. Choi, and O. Kwon, “Robust control of feedback lineariz-
able system with the parameter uncertainty and input constraint,” in
Proceedings of the 40th SICE Annual Conference, 2001.
[9] Y. V. Pant, H. Abbas, and R. Mangharam, “Tech report: Robust
model predictive control for non-linear systems with input and
state constraints via feedback linearization,” March 2016. [Online].
Available: http://tinyurl.com/nlrmpcwisc
[10] Y. V. Pant, K. Mohta, H. Abbas, T. X. Nghiem, J. Devietti, and
R. Mangharam, “Co-design of anytime computation and robust con-
trol,” in RTSS, Dec 2015, pp. 43–52.
[11] B. Kouvaritakis and M. Cannon, Model Predictive Control: Classical,
Robust and Stochastic. Springer Verlag, 2015.
[12] T. T. Johnson, S. Bak, M. Caccamo, and L. Sha, “Real-time
reachability for verified simplex design,” ACM Trans. Embed.
Comput. Syst., vol. 15, no. 2, pp. 26:1–26:27, Feb. 2016. [Online].
Available: http://doi.acm.org/10.1145/2723871
[13] M. Herceg, M. Kvasnica, C. Jones, and M. Morari, "Multi-Parametric Toolbox 3.0," in Proc. of the ECC, Zürich, Switzerland, July 17-19, 2013, pp. 502-510, http://control.ee.ethz.ch/~mpt.
[14] E. Kerrigan, "Matlab invariant set toolbox version 0.10.5," 2016. [Online]. Available: http://www-control.eng.cam.ac.uk/eck21/matlab/invsetbox
[15] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex
programming, version 2.0 beta,” http://cvxr.com/cvx, Sep. 2013.
[16] I. Gurobi Optimization, “Gurobi optimizer reference manual,” 2015.
[Online]. Available: http://www.gurobi.com
[17] M. Seidi, M. Hajiaghamemar, and B. Segee, “Fuzzy Control Systems:
LMI-Based Design,” Fuzzy Controllers- Recent Advances in Theory
and Applications, InTech, 2012.