Robust Model Predictive Control for Non-Linear Systems with Input and
State Constraints Via Feedback Linearization
Yash Vardhan Pant, Houssam Abbas, Rahul Mangharam
Abstract: Robust predictive control of non-linear systems
under state estimation errors and input and state constraints
is a challenging problem, and solutions to it have generally
involved solving computationally hard non-linear optimizations.
Feedback linearization has reduced the computational burden,
but has not yet been solved for robust model predictive control
under estimation errors and constraints. In this paper, we solve
this problem of robust control of a non-linear system under
bounded state estimation errors and input and state constraints
using feedback linearization. We do so by developing robust
constraints on the feedback linearized system such that the
non-linear system respects its constraints. These constraints
are computed at run-time using online reachability, and are
linear in the optimization variables, resulting in a Quadratic
Program with linear constraints. We also provide robust feasibility, recursive feasibility and stability results for our control
algorithm. We evaluate our approach on two systems to show
its applicability and performance.
In this paper we are concerned with the problem of controlling nonlinear dynamical systems $S$ of the form $\dot{x} = f(x) + G(x)u$ under state and input constraints, and subject to errors in the state estimate. This problem is formulated as

$\min_{\mathbf{x},\mathbf{u}} \; l(\mathbf{x},\mathbf{u}) \quad \text{s.t.} \quad \dot{x} = f(x) + G(x)u, \;\; x \in X, \; u \in U \qquad (1)$

where $l(\mathbf{x},\mathbf{u})$ is a cost function whose minimization over the state and input trajectories $\mathbf{x}$ and $\mathbf{u}$ ensures stability of the system. Sets $X \subset \mathbb{R}^{n_x}$ and $U \subset \mathbb{R}^{n_u}$ encode constraints on the state (e.g., safety) and the input. The input $u = u(\hat{x})$ is a function of a state estimate $\hat{x}$ that in general differs from the true state of the system.
The application of Model Predictive Control (MPC) to nonlinear systems involves the repeated solution of generally non-quadratic, non-convex optimizations. Various approaches for solving (or approximately solving) these optimizations, and their trade-offs, are reviewed in [1]. Another approach is to first feedback linearize the system $S$ [2]: namely, the applied control $u = u(x,v)$ is designed in such a way that the resulting closed-loop dynamics $S_{fl}$ are linear: $S_{fl}: \dot{z} = Az + Bv$. The input $v$ to the linearized dynamics can now be computed so as to optimize system performance and ensure stability. The state $z$ of the linearized
*This work was supported by STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA, NSF MRI-0923518, and the US Department of Transportation University Transportation Center. The authors are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, U.S.A.
system $S_{fl}$ is related to the state $x$ of the nonlinear system $S$ via a (system-specific) function $T$: $z = T(x)$.
Previous work on nonlinear MPC with feedback linearization assumed the state $x(t)$ is perfectly known to the controller at any moment in time [3]. However, in many cases only a state estimate $\hat{x}(t)$ is available, with $\hat{x}(t) \neq x(t)$, and we handle such cases. Robust MPC (RMPC) has been investigated as a way of handling state estimation errors for linear [4] and nonlinear systems [5], [6], but not via feedback linearization. In particular, for non-linear systems, [5] develops a non-linear MPC with tube-like constraints for robust feasibility, but involves solving two (non-convex) optimal control problems. In [6], the authors solve a non-linear Robust MPC through a bi-level optimization that involves solving a non-linear, non-smooth optimization, which is challenging. [6] also guarantees a weaker form of recursive feasibility than [4] and than what we guarantee in this work.
In [7] the authors approximate the non-linear dynamics of a quadrotor by linearizing them around hover and apply the RMPC of [4] to the linearized dynamics. This differs significantly from our approach, where we formulate the RMPC on the feedback linearized dynamics directly, and not on the dynamics obtained via Jacobian linearization of the non-linear system. Existing work on MPC via feedback linearization with input/state constraints has also assumed that either $T$ is the identity [3], or, in the case of uncertainties in the parameters, that there are no state constraints [8]. A non-identity $T$ is problematic when the state is not perfectly known, since the state estimation error $e = \hat{x} - x$ maps to the linearized dynamics via $T$ in non-trivial ways, greatly complicating the analysis. In particular, the error bounds for the state estimate in $z$-space now depend on the current nonlinear state $x$. One of the complications introduced by feedback linearization is that the bounds on the input ($u \in U$) may become a non-convex, state-dependent constraint on the input $v$ to $S_{fl}$: $V = \{v \mid \underline{v}(x,U) \leq v \leq \overline{v}(x,U)\}$. In [3], forward reachability is used to provide inner convex approximations to the input set $V$. A non-identity $T$ increases the computational burden since the non-linear reach set must be computed (with an identity $T$, the feedback linearized reach set is sufficient).
Contributions: We develop a feedback linearization solution to the above control problem, with state estimation errors, input and state constraints, and non-identity $T$. To the best of our knowledge, this is the first feedback linearization solution to this problem. The resulting control problem is solved by RMPC with time-varying linear constraint sets.
The paper is organized as follows: in the next section we
formulate the feedback linearized control problem. In Sec.
III, we describe the RMPC algorithm we use to solve it, and
prove that it stabilizes the nonlinear system. Sec. IV shows
how to compute the various constraint sets involved in the
RMPC formulation, and Sec. V applies our approach to an
example. Sec. VI concludes the paper. An online technical
report [9] contains proofs and more examples.
A common method for control of nonlinear systems is feedback linearization [2]. Briefly, in feedback linearization, one applies the feedback law $u(x,v) = R(x)^{-1}(-b(x) + v)$ to (1), so that the resulting dynamics, expressed in terms of the transformed state $z = T(x)$, are linear time-invariant:

$S_{fl}: \dot{z} = A_c z + B_c v \qquad (2)$

By using the remaining control authority in $v$ to control $S_{fl}$, we can effectively control the non-linear system for, say, stability or reference tracking. $T$ is a diffeomorphism [2] over a domain $D \subset X$. The original and transformed states, $x$ and $z$, have the same dimension, as do $u$ and $v$, i.e. $n_x = n_z$ and $n_u = n_v$. Because we are controlling the feedback linearized system, we must find constraint sets $Z$ and $V$ for the state $z$ and input $v$, respectively, such that $(z,v) \in Z \times V \implies (T^{-1}(z), u(T^{-1}(z), v)) \in X \times U$. We assume that the system (1) has no zero dynamics [2] and all states are controllable. In case there are zero dynamics, our approach is applicable to the controllable subset of the states as long as the span of the rows of $G(x)$ is involutive [2].
For feedback linearizing and controlling (1), only a periodic state estimate $\hat{x}$ of $x$ is available. This estimate arrives every $\tau$ time units, so we may write $\hat{x}_k := \hat{x}(k\tau) = x_k + e_k$, where $x_k$ and $e_k$ are the sampled state and error, respectively. We assume that $e_k$ is in a bounded set $E$ for all $k$. This implies that the feedback linearized system can be represented in discrete time: $z_{k+1} = Az_k + Bv_k$. The corresponding $z$-space estimate $\hat{z}_k$ is $\hat{z}_k = T(\hat{x}_k)$. In general the $z$-space error $\tilde{e}_k := T(\hat{x}_k) - T(x_k)$ is bounded for every $k$ but does not necessarily lie in $E$. Let $\tilde{E}_k$ be the set containing $\tilde{e}_k$: in Sec. IV-C we show how to compute it. Because the linearizing control operates on the state estimate and not $x_k$, we add a process noise term to the linearized, discrete-time dynamics. Our system model is therefore

$z_{k+1} = Az_k + Bv_k + w_k \qquad (3)$

where the noise term $w_k$ lies in the bounded set $W$ for all $k$. An estimate of $W$ can be obtained using the techniques of this paper. The problem (1) is therefore replaced by:
$\min_{\mathbf{z},\mathbf{v}} \; q(\mathbf{z},\mathbf{v}) \quad \text{s.t.} \quad z_{k+1} = Az_k + Bv_k + w_k, \;\; z_k \in Z, \; v_k \in V, \; w_k \in W \qquad (4)$

In general, the cost function $l(\mathbf{x},\mathbf{u}) \neq q(\mathbf{z},\mathbf{v})$. The objective function in (4) is a choice the control designer makes. For more details, and for when the quadratic form of $q(\mathbf{z},\mathbf{v})$ is justified, see [3]. In Thm. 2, we show that minimizing this cost function $q(\mathbf{z},\mathbf{v})$ implies stability of the system.
It is easy to derive the dynamics of the state estimate $\hat{z}_k$:

$\hat{z}_{k+1} = z_{k+1} + \tilde{e}_{k+1} = Az_k + Bv_k + w_k + \tilde{e}_{k+1} = A\hat{z}_k + Bv_k + (w_k + \tilde{e}_{k+1} - A\tilde{e}_k) \qquad (5)$

where $\hat{w}_{k+1} := w_k + \tilde{e}_{k+1} - A\tilde{e}_k$, and it lies in the set $\widehat{W}_{k+1} := W \oplus \tilde{E}_{k+1} \oplus (-A\tilde{E}_k)$.
Example 1: Consider the 2D system

$\dot{x}_1 = \sin(x_2), \quad \dot{x}_2 = -\tan(x_2) + u \qquad (6)$

The safe set for $x$ is given as $X = \{|x_1| \leq \pi/2, |x_2| \leq \pi/3\}$, and the input set is $U = [-2.75, 2.75]$. For the measurement $y = h(x) = x_1$, the system can be feedback linearized on the domain $D = \{x \mid \cos(x_2) \neq 0\}$, where it has a relative degree of $\rho = 2$. The corresponding linearizing feedback input is $u = \tan(x_2) + (\cos(x_2))^{-1}v$. The feedback linearized system is $\dot{z}_1 = z_2$, $\dot{z}_2 = v$, where $T$ is given by $z = T((x_1, x_2)) = (x_1, \sin(x_2))$. We can analytically compute the safe set in $z$-space as $Z = T(X) = \{|z_1| \leq \pi/2, |z_2| \leq 0.8660\}$. For a more complicated $T$, it is not possible to obtain analytical expressions for $Z$. The computation of $Z$ in this more general case is addressed in the online appendix [9].
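A quick numeric check of Example 1's safe-set mapping (our illustration, not part of the paper's implementation): under $z = T(x) = (x_1, \sin(x_2))$, the box $X$ maps to the quoted box $Z$.

```python
import math

def T(x1, x2):
    """Diffeomorphism of the running example: z = (x1, sin(x2))."""
    return x1, math.sin(x2)

# sin is monotone on [-pi/3, pi/3], so the extremes of z2 occur at the endpoints
z2_max = max(abs(T(0.0, x2)[1]) for x2 in (-math.pi / 3, 0.0, math.pi / 3))
# z1 = x1, so its bound is just pi/2
z1_max = max(abs(T(x1, 0.0)[0]) for x1 in (-math.pi / 2, math.pi / 2))
print(round(z2_max, 4))  # 0.866, the bound on |z2| quoted in the example
```

This confirms the bound $|z_2| \leq \sin(\pi/3) \approx 0.8660$ stated in the example.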
Notation. Given two subsets $A, B$ of $\mathbb{R}^n$, their Minkowski sum is $A \oplus B := \{a + b \mid a \in A, b \in B\}$. Their Pontryagin difference is $A \ominus B := \{c \in \mathbb{R}^n \mid c + b \in A \; \forall b \in B\}$. Given integers $n \leq m$, $[n:m] := \{n, n+1, \ldots, m\}$.
Assumption. Our approach applies when $X, U, E$ and $W$ are arbitrary convex polytopes (i.e., bounded intersections of half-spaces). For the sake of simplicity, in this paper we assume they are all hyper-rectangles that contain the origin.
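Under the hyper-rectangle assumption, both set operations reduce to per-axis interval arithmetic. The following sketch (ours; the paper's implementation uses the MPT Toolbox) illustrates the two operations on boxes represented as lists of (lo, hi) pairs:

```python
# Minkowski sum and Pontryagin difference for axis-aligned hyper-rectangles.

def minkowski_sum(A, B):
    """A (+) B = {a + b | a in A, b in B}: per-axis interval addition."""
    return [(al + bl, ah + bh) for (al, ah), (bl, bh) in zip(A, B)]

def pontryagin_diff(A, B):
    """A (-) B = {c | c + b in A for all b in B}; returns None if empty."""
    C = [(al - bl, ah - bh) for (al, ah), (bl, bh) in zip(A, B)]
    return None if any(lo > hi for lo, hi in C) else C

Z = [(-1.0, 1.0), (-1.0, 1.0)]
W = [(-0.125, 0.125), (-0.25, 0.25)]
print(minkowski_sum(Z, W))    # [(-1.125, 1.125), (-1.25, 1.25)]
print(pontryagin_diff(Z, W))  # [(-0.875, 0.875), (-0.75, 0.75)]
```

Note that the Pontryagin difference of boxes shrinks each axis by the disturbance's extent, which is exactly how the constraint tightening below uses it.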
Following [4], [10], we formulate a Robust MPC (RMPC) controller for (4) via constraint restriction. We outline the idea before providing the technical details. The key idea is to move the effects of the estimation error $\tilde{e}_k$ and process noise $w_k$ (the `disturbances') to the constraints, and work with the nominal (i.e., disturbance-free) dynamics: $\bar{z}_{k+1} = A\bar{z}_k + Bv_k$, $\bar{z}_0 = \hat{z}_0$. Because we would be optimizing over disturbance-free states, we must account for the noise in the constraints. Specifically, rather than require the next (nominal) state $\bar{z}_{k+1}$ to be in $Z$, we require it to be in the shrunk set $Z \ominus \widehat{W}_{k+1|k} \ominus \tilde{E}_{k+1|k}$: by definition of the Pontryagin difference, this implies that whatever the actual value of the noise $\hat{w}_{k+1} \in \widehat{W}_{k+1|k}$ and of the estimation error $\tilde{e}_{k+1} \in \tilde{E}_{k+1|k}$, the actual state $z_{k+1}$ will be in $Z$. This is repeated over the entire MPC prediction horizon $j = 1, \ldots, N$, with further shrinking at every step. For further steps ($j > 1$), the process noise $\hat{w}_{k+j|k}$ is propagated through the dynamics, so the shrinking term $\widehat{W}$ is shaped by a stabilizing feedback controller $\bar{z} \mapsto K\bar{z}$. At the final step ($j = N+1$), a terminal constraint is derived using the worst-case estimation error set $\tilde{E}_{max}$ and a global inner approximation of the input constraints, $V^{inner}_{global}$.
Through this successive constraint tightening we ensure robust safety and feasibility of the feedback linearized system (and hence of the non-linear system). Since we use just the nominal dynamics, and show that the tightened constraints are linear in the state and inputs, we still solve a Quadratic Program (QP) for the RMPC optimization. The difficulty of applying RMPC in our setting is that the amounts by which the various sets are shrunk vary with time because of the time-varying state estimation error, are state-dependent, and involve set computations with the non-convexity-preserving mapping $T$. One of our contributions in this paper is to establish recursive feasibility of RMPC with time-varying constraint sets.
The RMPC optimization $P_k(\hat{z}_k)$ for solving (4) is:

$J^*(\hat{z}_k) = \min_{\mathbf{v}} \sum_{j=0}^{N} \left( \bar{z}_{k+j|k}^T Q \bar{z}_{k+j|k} + v_{k+j|k}^T R v_{k+j|k} \right) + \bar{z}_{k+N+1|k}^T Q_f \bar{z}_{k+N+1|k}$ (7a)

s.t. $\bar{z}_{k|k} = \hat{z}_k$ (7b)
$\bar{z}_{k+j+1|k} = A\bar{z}_{k+j|k} + Bv_{k+j|k}, \; j = 0, \ldots, N$ (7c)
$\bar{z}_{k+j|k} \in Z_{k+j|k}, \; j = 0, \ldots, N$ (7d)
$v_{k+j|k} \in V_{k+j|k}, \; j = 0, \ldots, N-1$ (7e)
$p_{N+1} = [\bar{z}_{k+N+1|k}, v_{k+N|k}]^T \in P_f$ (7f)

Here, $\bar{z}$ is the state of the nominal feedback linearized system. The cost and constraints of the optimization are explained below:
Eq. (7a) shows a cost quadratic in $\bar{z}$ and $v$, where as usual $Q$ is positive semi-definite and $R$ is positive definite. In the terminal cost term, $Q_f$ is the solution of the Lyapunov equation $Q_f - (A+BK)^T Q_f (A+BK) = Q + K^T R K$. This choice guarantees that the terminal cost equals the infinite horizon cost under a linear feedback control $\bar{z} \mapsto K\bar{z}$ [11].
Eq. (7b) initializes the nominal state with the current state estimate.
Eq. (7c) gives the nominal dynamics of the discretized feedback linearized system.
Eq. (7d) tightens the admissible set of the nominal state by a sequence of shrinking sets.
Eq. (7e) constrains $v_{k+j|k}$ such that the corresponding $u(x,v)$ is admissible, and the RMPC is recursively feasible.
Eq. (7f) constrains the final input and nominal state to be within a terminal set $P_f$.
The details of these sets' definitions and computations are given in Sec. IV.
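The Lyapunov equation defining $Q_f$ can be solved by fixed-point iteration. The following sketch (ours, for illustration) shows this in the scalar case, where all quantities are numbers; for matrices one would use a discrete Lyapunov solver instead.

```python
# Solve Q_f - (A+BK)' Q_f (A+BK) = Q + K' R K by fixed-point iteration
# (scalar case). For |A + BK| < 1 the iteration converges geometrically.

def terminal_weight(A, B, K, Q, R, iters=200):
    a = A + B * K                 # closed-loop dynamics A + BK
    rhs = Q + K * R * K           # right-hand side Q + K'RK
    Qf = 0.0
    for _ in range(iters):
        Qf = rhs + a * Qf * a     # Q_f <- Q + K'RK + (A+BK)' Q_f (A+BK)
    return Qf

Qf = terminal_weight(A=1.0, B=1.0, K=-0.5, Q=1.0, R=1.0)
print(round(Qf, 6))  # 1.666667, i.e. (Q + K^2 R) / (1 - (A+BK)^2)
```

In the scalar case the closed form $(Q + K^2R)/(1 - (A+BK)^2)$ makes the infinite-horizon interpretation of the terminal weight explicit.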
A. State and Input Constraints for the Robust MPC
The state and input constraints for the RMPC are defined
as follows:
The state constraints $Z_{k+j|k}$: The tightened state constraints are functions of the error sets $\tilde{E}_{k+j|k}$ and disturbance sets $\widehat{W}_{k+j|k}$, and are defined for $j = 0, \ldots, N$ as

$Z_{k+j|k} = Z \ominus \bigoplus_{i=0}^{j-1} L_i \widehat{W}_{k+j-i|k} \ominus \tilde{E}_{k+j|k} \qquad (8)$

(Recall $Z$ is a subset of $T(X)$; $\tilde{E}_{k+j|k}$ and $\widehat{W}_{k+j|k}$ bound the estimation error and noise, resp., and are formally defined in Sec. IV.) The state transition matrix $L_j$, $j = 0, \ldots, N$, is defined as $L_0 = I$, $L_{j+1} = (A+BK)L_j$. The intuition behind this construction was given at the start of this section.
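To make the shrinking concrete, the following sketch (our illustration, not the paper's code) evaluates a constraint-tightening recursion of the form of (8) for an assumed scalar system $z^+ = z + v$ with gain $K = -0.5$, and constant (assumed) disturbance and error intervals; in the actual algorithm $\widehat{W}$ and $\tilde{E}$ are recomputed online from the reach sets.

```python
# Tightened state sets for a scalar sketch: Z = [-z_hi, z_hi],
# W_hat = [-w, w], E_tilde = [-e, e], L_0 = 1, L_{j+1} = (A + BK) L_j.

def tightened_state_sets(z_hi, w, e, K, N):
    A_cl = 1.0 + K            # closed-loop A + BK for z+ = z + v
    sets, Lj, shrink = [], 1.0, 0.0
    for j in range(N + 1):
        hi = z_hi - shrink - e   # Z (-) sum_{i<j} L_i W_hat (-) E_tilde
        sets.append((-hi, hi))
        shrink += Lj * w         # accumulate the L_j W_hat term
        Lj *= A_cl               # L_{j+1} = (A + BK) L_j
    return sets

Z_seq = tightened_state_sets(z_hi=1.0, w=0.125, e=0.0625, K=-0.5, N=3)
print(Z_seq[0])  # (-0.9375, 0.9375): only the error term at j = 0
print(Z_seq[2])  # (-0.75, 0.75): also shrunk by (1 + 0.5) * 0.125
```

Because $|A + BK| < 1$, the accumulated shrinking converges instead of emptying the set, which is why the stabilizing $K$ appears in the tightening.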
The input constraints $V_{k+j|k}$: For $j = 0, \ldots, N-1$,

$V_{k+j|k} = \underline{V}_{k+j|k} \ominus \bigoplus_{i=0}^{j-1} K L_i \widehat{W}_{k+j-i|k} \qquad (9)$

where $\underline{V}_{k+j|k}$ is an inner approximation of the set of admissible inputs $v$ at prediction step $k+j|k$, as defined in Sec. IV-B. The intuition behind this construction is similar to that of $Z_{k+j|k}$: given the inner approximation $\underline{V}_{k|k}$, it is further shrunk at each prediction step $j$ by propagating the noise $\hat{w}_k$ forward through the dynamics, shaped by the stabilizing feedback law $K$, following [4].
The terminal constraint $P_f$: This constrains the extended state $p_k = [\bar{z}_k, v_{k-1}]^T$, and is given by

$P_f = C_p \ominus \hat{F}\widehat{W}_{max} \qquad (10)$

where $\widehat{W}_{max} \subset \mathbb{R}^{n_z}$ is a bounding set on the worst-case disturbance (we show how it is computed in Sec. IV-C), and $C_p \subset \mathbb{R}^{n_z} \times \mathbb{R}^{n_v}$ is an invariant set of the nominal dynamics subject to the stabilizing controller $\bar{z} \mapsto K\bar{z}$, naturally extended to the extended state $p$: that is, there exists a feedback control law $p \mapsto \hat{K}p$ such that for all $p \in C_p$,

$\hat{A}p + \hat{B}\hat{K}p + \hat{F}\hat{w} \in C_p \quad \forall \hat{w} \in \widehat{W}_{max} \qquad (11)$

with $\hat{L}_N = (\hat{A} + \hat{B}\hat{K})^N$. It is important to note the following:
The set $P_f$ can be computed offline, since it depends on $\tilde{E}_{max}$, $\widehat{W}_{max}$, and the global inner approximation of the constraints on $v$, $V^{inner}_{global}$, all of which can be computed offline.
If $P_f$ is non-empty, then all intermediate sets that appear in (7) are also non-empty, since $P_f$ shrinks the state and input sets by the maximum disturbances $\widehat{W}_{max}$ and $\tilde{E}_{max}$. Thus we can tell, before running the system, whether RMPC might be faced with empty constraint sets (and thus infeasible optimizations).
Note that all constraints are linear.
B. The Control Algorithm
We can now describe the algorithm used for solving (7)
by robust receding horizon control.
C. Robust Feasibility and Stability
We are now ready to state the main result of this paper:
namely, that the RMPC of the feedback linearized system
(7) is feasible at all time steps if it starts out feasible, and
that it stabilizes the nonlinear system, for all possible values
of the state estimation error and feedback linearization error.
Theorem 1 (Robust Feasibility): If at some time step $k_0 \geq 0$ the RMPC optimization $P_{k_0}(\hat{z}_{k_0})$ is feasible, then all subsequent optimizations $P_k(\hat{z}_k)$, $k > k_0$, are also feasible. Moreover, the nonlinear system (1) controlled by Algorithm 1 and subject to the disturbances $(E, W)$ satisfies its state and input constraints at all times $k \geq k_0$.

Algorithm 1 RMPC via feedback linearization
Require: System model, $X$, $U$, $E$, $W$
Offline, compute:
  Initial safe sets $X_0$ and $Z$
  $\tilde{E}_{max}$, $\widehat{W}_{max}$ (Sec. IV-C)
  $C_p$, $P_f$ (Sec. III-A)
if $P_f \neq \emptyset$ then
  for $k = 1, 2, \ldots$ do
    Get estimate $\hat{x}_k$, compute $\hat{z}_k = T(\hat{x}_k)$
    Compute $\underline{V}_{k+j|k}$, $\tilde{E}_{k+j|k}$, $\widehat{W}_{k+j|k}$ (Sec. IV-B, IV-C)
    Compute $Z_{k+j|k}$, $V_{k+j|k}$ (Sec. III-A)
    $(v^*_{k|k}, \ldots, v^*_{k+N|k})$ = solution of $P_k(\hat{z}_k)$
    Apply $u_k = R(\hat{x}_k)^{-1}[-b(\hat{x}_k) + v^*_{k|k}]$ to the plant
  end for
end if
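For illustration only, here is a minimal runnable sketch of the online loop of Algorithm 1 on the 2-state running example, with simplifications that are NOT in the paper: exact state estimates, no constraint tightening, and a fixed stabilizing feedback standing in for the RMPC solution $v^*_{k|k}$. It shows only the estimate $\to T \to v \to$ linearizing input $\to$ plant flow; the plant dynamics use the form reconstructed in Example 1.

```python
import math

def T(x1, x2):
    """Diffeomorphism of the running example: z = (x1, sin(x2))."""
    return x1, math.sin(x2)

def linearizing_input(x2, v):
    """u = tan(x2) + v / cos(x2), valid on the domain cos(x2) != 0."""
    return math.tan(x2) + v / math.cos(x2)

def run(x1, x2, steps=200, dt=0.1, k1=0.5, k2=1.0):
    for _ in range(steps):
        z1, z2 = T(x1, x2)              # "estimate" (exact here) mapped to z-space
        v = -k1 * z1 - k2 * z2          # stand-in for the RMPC solution v*_{k|k}
        u = linearizing_input(x2, v)
        x1 += dt * math.sin(x2)         # plant: x1' = sin(x2)
        x2 += dt * (-math.tan(x2) + u)  # plant: x2' = -tan(x2) + u (reconstructed)
    return x1, x2

xf1, xf2 = run(0.5, 0.3)
print(abs(xf1) < 0.05 and abs(xf2) < 0.05)  # True: the loop drives the state to 0
```

In $z$-coordinates the closed loop is the stable double integrator $\dot{z}_1 = z_2$, $\dot{z}_2 = -k_1 z_1 - k_2 z_2$, which is why the simple feedback suffices for this sketch.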
Theorem 2 (Stability): Given an equilibrium point $x_e \in X_0 \cap T^{-1}(Z)$ of the nonlinear dynamics (1), Algorithm 1 stabilizes the nonlinear system to an invariant set around $x_e$.
The proofs are in the online report [9].
Algorithm 1 and the problem $P_k(\hat{z}_k)$ (7) use a number of constraint sets to ensure recursive feasibility of the successive RMPC optimizations, namely: inner approximations $\underline{V}_{k+j|k}$ of the admissible input sets, bounding sets $\tilde{E}_{k+j|k}$ for the ($T$-mapped) estimation error, bounding sets $\widehat{W}_{k+j|k}$ for the process noise, and the largest error and noise sets $\tilde{E}_{max}$ and $\widehat{W}_{max}$. In this section we show how these sets are defined and computed. Note, our approach is an extension of [3]: 1) we compute the feasible set for the states of the feedback linearized system under a non-trivial diffeomorphism $T$, and 2) we compute the bounding sets for the disturbances while considering estimation error and process noise, neither of which is considered in [3]. In addition, due to the presence of state-estimation error, we compute these sets using an over-approximation of the reach set, as seen in the following subsections.
Since we control the system in $z$-space, we need to compute a set $Z \subset \mathbb{R}^{n_z}$ s.t. $z \in Z \implies x = T^{-1}(z) \in X$. Moreover, to check feasibility at time 0 of the MPC optimization, we need a subset $X_0 \subseteq X$ s.t. $x \in X_0 \implies z = T(x) \in Z$. Mapping sets between $z$- and $x$-spaces via the arbitrary diffeomorphism $T$ has to be done numerically, and we show how in the online appendix.
A. Approximating the reach set of the nonlinear system
First we show how to compute an outer-approximation of the $j$-step reach set of the nonlinear system, starting at time $k$, $X_{k+j|k}$. This is needed for computing $\underline{V}_{k+j|k}$ and $\tilde{E}_{k+j|k}$.
Fig. 1. The outer-approximated reach sets for $x_{k+j}$, computed at time steps $k$ and $k+1$.
In all but the simplest systems, forward reachable sets cannot be computed exactly. To approximate them we may use a reachability tool for nonlinear systems like RTreach [12]. A reachability tool computes an outer-approximation of the reachable set of a system starting from some set $\mathcal{X} \subseteq X$, subject to inputs from a set $U$, for a duration $T \geq 0$. Denote this approximation by $RT_T(\mathcal{X}, U)$, so $x(T) \in RT_T(\mathcal{X}, U)$ for all $x(0) \in \mathcal{X}$ and $u: [0,T] \to U$.
At time $k$, the state estimate $\hat{x}_k$ is known. Therefore $x_k = \hat{x}_k - e_k \in \hat{x}_k \oplus (-E) := X_{k|k}$. Propagating $X_{k|k}$ forward one step through the continuous-time nonlinear dynamics yields $X_{k+1|k}$, which is outer-approximated by $RT_T(X_{k|k}, U)$. The state estimate that the system will receive at time $k+1$ is therefore bound to be in the set $RT_T(X_{k|k}, U) \oplus E$. Since $0 \in E$, we maintain $X_{k+1|k} \supseteq RT_T(X_{k|k}, U) \oplus E$. For $1 \leq j \leq N$, we define the $j$-step over-approximate reach set computed at time $k$ to be

$X_{k|k} := \hat{x}_k \oplus (-E), \quad X_{k+j|k} := RT_T(X_{k+j-1|k}, U) \oplus E \oplus (-E) \qquad (12)$
(The reason for adding the extra $E$ term will be apparent in the proof of Thm. 1.) Fig. 1 shows a visualization of this approach. The following holds by construction:
Lemma 3: For any time $k$ and step $j \geq 1$, $x_{k+j} \in X_{k+j|k}$.
This construction of $X_{k+j|k}$ permits us to prove recursive feasibility of the RMPC controller, because it causes the constraints of the RMPC problem set up at time $k+1$ to be consistent with the constraints of the problem set up at time $k$.
B. Approximating the bounding sets for the input
Given $x \in X$, define the set $V(x) := \{v \in \mathbb{R}^{n_v} \mid u(x) = R^{-1}(x)[-b(x) + v] \in U\}$. We assume that there exist functions $\underline{v}_i, \overline{v}_i: X \to \mathbb{R}$ s.t. for any $x$, $V(x) = \{[v_1, \ldots, v_{n_v}]^T \mid \underline{v}_i(x;U) \leq v_i \leq \overline{v}_i(x;U)\}$. Because in general $V(x)$ is not a rectangle, we work with inner and outer rectangular approximations of $V(x)$. Specifically, let $\mathcal{X}$ be a subset of $X$. Define the inner and outer bounding rectangles, respectively, as

$\underline{V}(\mathcal{X}) := \{[v_1, \ldots, v_{n_v}]^T \mid \max_{x \in \mathcal{X}} \underline{v}_i(x;U) \leq v_i \leq \min_{x \in \mathcal{X}} \overline{v}_i(x;U)\}$
$\overline{V}(\mathcal{X}) := \{[v_1, \ldots, v_{n_v}]^T \mid \min_{x \in \mathcal{X}} \underline{v}_i(x;U) \leq v_i \leq \max_{x \in \mathcal{X}} \overline{v}_i(x;U)\}$

By construction, we have for any subset $\mathcal{X} \subseteq X$

$\underline{V}(\mathcal{X}) \subseteq \cap_{x \in \mathcal{X}} V(x) \subseteq \overline{V}(\mathcal{X}) \qquad (13)$
Fig. 2. Local and global inner approximations of input constraints for the running example, with $X_{k+j|k} = [-\pi/4, 0] \times [-0.9666, -0.6283]$ for some $k, j$ and $U = [-2.75, 2.75]$. Color in online version.
If two subsets of $X$ satisfy $\mathcal{X}_1 \subset \mathcal{X}_2$, then it holds that

$\underline{V}(\mathcal{X}_2) \subseteq \underline{V}(\mathcal{X}_1), \quad \overline{V}(\mathcal{X}_1) \subseteq \overline{V}(\mathcal{X}_2) \qquad (14)$

We can compute:

$\underline{V}_{k+j|k} = \underline{V}(X_{k+j|k}), \quad V^{inner}_{global} = \underline{V}(X) \qquad (15)$

In practice we use interval arithmetic to compute these sets, since $X_{k+j|k}$ and $U$ are hyper-intervals. Fig. 2 shows these sets for the running example.
C. Approximating the bounding sets for the disturbances
We will also need to define containing sets for the state estimation error in $z$-space: recall that $\hat{z}_k = T(\hat{x}_k) = T(x_k + e_k)$. We use a Taylor expansion (and the remainder theorem):

$\hat{z}_k = T(x_k) + \frac{dT}{dx}(x_k)e_k + r_k(c) = T(x_k) + M(x_k)e_k + r_k(c) = T(x_k) + h_k + r_k(c), \quad c \in x_k \oplus E$

The remainder term $r_k(c)$ is bounded: when setting up $P_k(\hat{z}_k)$, at the $j$th step, $r_{k+j|k} \in D_{k+j|k} := \cup_{c \in X_{k+j|k} \oplus E,\, e \in E} \frac{1}{2}e^T\frac{d^2T}{dx^2}(c)e$, where $X_{k+j|k}$ is the reach set computed in (12).
The error $h_k$ lives in $\cup_{x \in X_{k|k},\, e \in E} M(x)e$. Thus when setting up $P_k(\hat{z}_k)$, the error $h_{k+j|k}$ lives in $\cup_{x \in X_{k+j|k}} M(x)E$. Finally, the rectangular over-approximation of this set is

$H_{k+j|k} := \{h \mid \min_{x \in X_{k+j|k},\, e \in E} \textstyle\sum_{\ell} M_{i\ell}(x)e(\ell) \leq h(i) \leq \max_{x \in X_{k+j|k},\, e \in E} \textstyle\sum_{\ell} M_{i\ell}(x)e(\ell)\} \qquad (16)$

where $M_{i\ell}$ is the $(i,\ell)$th element of matrix $M$ and $h(i)$ is the $i$th element of $h$.
Therefore the state estimation error $h_{k+j|k} + r_{k+j|k}$ is bounded in the set $H_{k+j|k} \oplus D_{k+j|k}$. In the experiments we ignore the remainder term $D_{k+j|k}$ based on the observation that $e_k$ is small relative to the state $x_k$. Thus we use:

$\tilde{E}_{k+j|k} = H_{k+j|k} \qquad (17)$

Example 2: For the running example (6), we have $M = [1, 0; 0, \cos(x_2)]$. If the estimation error $e$ (in radians) is bounded in $E = \{e \mid \|e\|_\infty \leq 0.0227\}$, then the relative linearization error, averaged over several realizations of the error, is less than $2 \cdot 10^{-3}$.
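A sketch of the rectangular bound (16) for Example 2 (ours; the $x_2$-interval of the reach set is an assumed value for illustration):

```python
import math

def mapped_error_box(x2_lo, x2_hi, e_max, n=200):
    """Rectangular bound on h = M(x) e for M(x) = [[1, 0], [0, cos(x2)]]."""
    grid = [x2_lo + (x2_hi - x2_lo) * i / n for i in range(n + 1)]
    # h1 = 1 * e1, so its bound is just e_max
    h1 = (-e_max, e_max)
    # h2 = cos(x2) * e2; extremes occur at the largest |cos(x2)| over the set
    c = max(abs(math.cos(t)) for t in grid)
    h2 = (-e_max * c, e_max * c)
    return h1, h2

h1, h2 = mapped_error_box(0.3, 0.6, e_max=0.0227)
print(h2[1] < h1[1])  # True: cos(x2) < 1 on [0.3, 0.6] shrinks the 2nd component
```

This also shows why the error bound is state-dependent: the further the reach set is from $x_2 = 0$, the tighter the bound on the second component.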
We also need to calculate containing sets for the process noise $\hat{w}$. Recall that for all $k, j$, $\hat{z}_{k+j+1} = A\hat{z}_{k+j} + Bv_{k+j} + \hat{w}_{k+j+1}$. Therefore

$\hat{w}_{k+j+1} \in \widehat{W}_{k+j+1|k} := W \oplus \tilde{E}_{k+j+1|k} \oplus (-A\tilde{E}_{k+j|k}) \qquad (18)$

We also define the set $\tilde{E}_{max}$, which is necessary for the terminal constraints of Eq. (10). $\tilde{E}_{max}$ represents the worst-case bound on the estimation error $\tilde{e}_k$, and is computed similarly to Eq. (16), but over the entire set $X$. $\widehat{W}_{max}$ is then defined as:

$\widehat{W}_{max} = W \oplus \tilde{E}_{max} \oplus (-A\tilde{E}_{max}) \qquad (19)$
We evaluate our approach on a 4D flexible joint manipulator, and on a simple 2-state example in the online technical report [9]. We implemented the RMPC controller of Alg. 1 in MATLAB. The set computations were done using the MPT Toolbox [13], and the invariant set computations using the Matlab Invariant Set Toolbox [14]. The reachability computations for $X_{k+j|k}$ were performed on the linear dynamics and mapped back to $x$-space. The RMPC optimizations were formulated in CVX [15] and solved by Gurobi [16].
A. Single link flexible joint manipulator
We consider the single link flexible manipulator system $S$, also used in [8] and [17], whose dynamics are given by:

$\dot{x}_1 = x_3, \quad \dot{x}_2 = x_4, \quad \dot{x}_3 = -\frac{mgl}{2I}\sin(x_1) - \frac{\sigma}{I}(x_1 - x_2), \quad \dot{x}_4 = \frac{\sigma}{J}(x_1 - x_2) + \frac{u}{J}$

This models a system where a motor, with an angular moment of inertia $J = 1$, is coupled to a uniform thin bar of mass $m = 1/g$, length $l = 1$ m and moment of inertia $I = 1$, through a flexible torsional spring with stiffness $\sigma = 1$, and $g = 9.8$ m s$^{-2}$. States $x_1$ and $x_2$ are the angles of the bar and the motor shaft in radians, respectively, and $x_3, x_4$ are their respective rotational speeds in radians/sec. The safe set is the box $X = [-\pi/4, \pi/4] \times [-\pi/4, \pi/4] \times [-\pi, \pi] \times [-\pi, \pi]$. The input torque $u$ is bounded in $U = [\underline{u}, \overline{u}] = [-10, 10]$ N$\cdot$m. The estimation error $e = \hat{x} - x$ is bounded in $E = [-\pi/180, \pi/180]^4 \subset \mathbb{R}^4$, and $W = [-10^{-3}, 10^{-3}]^4 \subset \mathbb{R}^4$.
The diffeomorphism $T$ is given by $z = T(x)$, collecting the output $y = x_1$ and its first three derivatives along the dynamics. The input to the feedback linearized system is given by $v = \beta u + \alpha(x)$, where $\beta = \frac{\sigma}{IJ}$ and $\alpha(x)$ is the corresponding nonlinear drift term obtained by differentiating $z_4$ along the dynamics. The feedback linearized system $S_{fl}$ has the dynamics $\dot{z}_1 = z_2$, $\dot{z}_2 = z_3$, $\dot{z}_3 = z_4$, $\dot{z}_4 = v$.
Fig. 3. The states and their estimates of the feedback linearized and non-linear manipulator. Note $z_1 = x_1$ and $z_2 = x_2$. Color in online version.
A global inner approximation of the $v$ input set is computed, via interval arithmetic, as $V^{inner}_{global} = [\max_{x \in X}\alpha(x) + \beta\underline{u}, \min_{x \in X}\alpha(x) + \beta\overline{u}]$. Similarly, the inner approximations $\underline{V}_{k+j|k}$ are computed online by interval arithmetic as $\underline{V}_{k+j|k} = [\max_{x \in X_{k+j|k}}\alpha(x) + \beta\underline{u}, \min_{x \in X_{k+j|k}}\alpha(x) + \beta\overline{u}]$. Using the procedure in the appendix [9], the set of safe states $Z$ for $S_{fl}$ is computed, along with $X_0 = [-0.4655, 0.4655]^2 \times [-2.7598, 2.7598] \times [-2.793, 2.793]$. Comparing $X_0$ to the set $X$ shows that we can stabilize the system starting from initial states in a significantly large region of $X$.
We applied our controller to the above system with a discretization rate of 10 Hz and MPC horizon $N = 10$. Fig. 3 shows the states of the feedback linearized system $S_{fl}$. They converge to the origin in the presence of estimation error, while respecting all constraints. Fig. 3 also shows $x_3$ and $x_4$: they also converge to zero. Fig. 4 shows the input $v$ to $S_{fl}$ along with the global inner approximation $V^{inner}_{global}$ and the $x$-dependent inner approximations at the instant when the control is applied, $\underline{V}_{k|k}$, computed online. Note that the bounds computed online allow for significantly more control action than the conservative global inner approximation. Finally, Fig. 4 also shows the input $u$ applied to the non-linear system (and its bounds), which robustly respects its constraints $u \in U$.
In this paper we develop the first algorithm for robust control of a non-linear system with estimation errors and state and input constraints via feedback linearization and Robust MPC. Experimental results show that the control algorithm stabilizes the systems while ensuring robust constraint satisfaction. While we only evaluated our approach on single-input systems, the formulation and set computations involved hold as-is for multi-input systems as well.
Limitations of the approach mostly have to do with the numerical limitations involved in computing the constraint sets, and the potential conservatism of the approximations.
Fig. 4. Inputs vand uand their bounds for the manipulator example.
Color in online version.
A. Constraints of successive MPC problems
We are now ready to state and prove a key lemma
regarding the evolution of the state, error and input sets
between MPC optimization problems. This lemma will be
key to proving recursive feasibility of the MPC controller,
since it allows us to show that the constraint sets of one
problem, at time k, are appropriate supersets of the constraint
sets of the next problem, at time k+ 1.
Lemma 4: Let $X_{k+j|k}$ be the $j$-step outer-approximate reach set computed at time $k$ by a reachability tool as described in Sec. IV-A. Let $\widehat{W}_{k+j|k}$ be the set defined in (18). Let $\tilde{E}_{k+j|k}$ be the error set computed using (16), (17). Let $\underline{V}_{k+j|k} = \underline{V}(X_{k+j|k})$ and $\overline{V}_{k+j|k} = \overline{V}(X_{k+j|k})$. Then the following hold for all $k \geq 0$, $j \geq 1$:
1) $X_{k+1+j|k+1} \subseteq X_{k+j+1|k}$
2) $\tilde{E}_{k+1+j|k+1} \subseteq \tilde{E}_{k+j+1|k}$
3) $\widehat{W}_{k+1+j|k+1} \subseteq \widehat{W}_{k+j+1|k}$
4) $\overline{V}_{k+1+j|k+1} \subseteq \overline{V}_{k+j+1|k}$
5) $\underline{V}_{k+1+j|k+1} \supseteq \underline{V}_{k+j+1|k}$ (note the change in inclusion direction)
Proof: 1) Fix an arbitrary $k$. We prove this by induction on $j \geq 1$.
Base case: $j = 1$. By construction, $\hat{x}_{k+1} \in RT_T(X_{k|k}, U) \oplus E$. Therefore at time $k+1$, when setting up the problem $P_{k+1}(\hat{z}_{k+1})$, the algorithm will first compute $X_{k+1|k+1} = \hat{x}_{k+1} \oplus (-E) \subseteq RT_T(X_{k|k}, U) \oplus E \oplus (-E) = X_{k+1|k}$. Also, $X_{k+2|k+1} = RT_T(X_{k+1|k+1}, U) \oplus E \oplus (-E) \subseteq RT_T(X_{k+1|k}, U) \oplus E \oplus (-E) = X_{k+2|k}$.
Induction step: $j > 1$. By definition, $X_{k+1+j|k+1} = RT_T(X_{k+1+j-1|k+1}, U) \oplus E \oplus (-E) \subseteq RT_T(X_{k+j|k}, U) \oplus E \oplus (-E)$ (by the induction hypothesis). This last set equals $X_{k+j+1|k}$ by definition.
2) By 1) we have that $\min_{x \in X_{k+j+1|k},\, e \in E} M_{i\ell}(x)e(\ell) \leq \min_{x \in X_{k+1+j|k+1},\, e \in E} M_{i\ell}(x)e(\ell)$ and that $\max_{x \in X_{k+1+j|k+1},\, e \in E} M_{i\ell}(x)e(\ell) \leq \max_{x \in X_{k+j+1|k},\, e \in E} M_{i\ell}(x)e(\ell)$, which yields the desired inclusion.
3) This is immediate from the definition (18) and 2).
4) and 5) These are immediate from (14).
B. Proof of Theorem 1
We will prove the Theorem by recursion by showing that if
at time step k, the problem Pkzk)is feasible and the feasible
control input vk=v
k|kis applied, then vkis admissible
(meets the system constraints) and at time k+ 1,zk+1 is
inside Zand also Pk+1(ˆzk+1 )is feasible for all disturbances.
By recursion then, if we have feasibility at step k=k0, we
have robust constraint satisfaction and feasibility at time step
k0+ 1 and so on for all k > k0.
To begin, let Pkzk)be feasible, then it has a feasible
solution ({z
j=0 ,{v
j=0)that satisfies all the
constraints of the Robust MPC. Now let’s construct a feasible
candidate solution for Pk+1(ˆzk+1 )at the next time step by
shifting the above solution one-step forward. Consider the
candidate solution:
¯zk+j+1|k+1 =z
k+j+1|k+Ljˆwk+1,j[0 : N](20a)
¯zk+N+2|k+1 =A¯zk+N+1|k+1 +B¯vk+N+1|k+1 (20b)
¯vk+j+1|k+1 =v
k+j+1|k+KLjˆwk+1 ,j[0 : N1]
¯vk+N+1|k+1 =K¯zk+N+1|k+1 (20d)
First we will show that the input and state constraints are
satisfied by vkand ¯zk+1, then prove feasibility of the above
candidate solution for Pk+1(ˆzk+1 ).
Validity of input and next state: The next state is:

$z_{k+1} = Az_k + Bv_k + w_k = A(\hat{z}_k - \tilde{e}_k) + Bv^*_{k|k} + w_k = A\hat{z}_k + Bv^*_{k|k} - \tilde{e}_{k+1} + (w_k + \tilde{e}_{k+1} - A\tilde{e}_k) = z^*_{k+1|k} - \tilde{e}_{k+1} + \hat{w}_{k+1} \qquad (21)$

By feasibility of the solution at time $k$, $z^*_{k+1|k} \in Z_{k+1|k} = Z \ominus \widehat{W}_{k+1|k} \ominus \tilde{E}_{k+1|k}$. Therefore, $z_{k+1} \in Z$ and so $x_{k+1} \in X$.
Moreover, by the feasibility of $v^*_{k|k}$ for $P_k(\hat{z}_k)$ and by the definition of $\underline{V}_{k|k}$, $v_k = v^*_{k|k} \in \underline{V}_{k|k}$, which implies that $u_k \in U$.
Hence, if $P_k(\hat{z}_k)$ is feasible, then the applied input at time step $k$ and the resulting next state $z_{k+1}$ (and hence $x_{k+1}$) are admissible under all possible disturbances. The next part of the proof will focus on showing that the candidate solution of Eq. (20) is indeed feasible for $P_{k+1}(\hat{z}_{k+1})$ by proving that it meets all the constraints.
Initial Condition: Recall from (5) that $\hat{z}_{k+1} = A\hat{z}_k + Bv_k + \hat{w}_{k+1}$. Also, by the construction of the candidate,

$\bar{z}_{k+1|k+1} = z^*_{k+1|k} + \hat{w}_{k+1} \qquad (22)$

Since $z^*_{k|k} = \hat{z}_k$ and $v^*_{k|k} = v_k$, by the two equations above we have

$\bar{z}_{k+1|k+1} = \hat{z}_{k+1} \qquad (23)$

Hence, the candidate solution does indeed satisfy the initial condition for $P_{k+1}(\hat{z}_{k+1})$. Next we show that the candidate solution satisfies the nominal dynamics.
Nominal Dynamics: For $0 \leq j < N$, we have:

$\bar{z}_{k+j+2|k+1} = z^*_{k+j+2|k} + L_{j+1}\hat{w}_{k+1} = Az^*_{k+j+1|k} + Bv^*_{k+j+1|k} + L_{j+1}\hat{w}_{k+1}$

By the construction of the candidate solution,

$= A(\bar{z}_{k+j+1|k+1} - L_j\hat{w}_{k+1}) + B(\bar{v}_{k+j+1|k+1} - KL_j\hat{w}_{k+1}) + L_{j+1}\hat{w}_{k+1}$
$= A\bar{z}_{k+j+1|k+1} + B\bar{v}_{k+j+1|k+1} - (A+BK)L_j\hat{w}_{k+1} + L_{j+1}\hat{w}_{k+1}$
$= A\bar{z}_{k+j+1|k+1} + B\bar{v}_{k+j+1|k+1} - L_{j+1}\hat{w}_{k+1} + L_{j+1}\hat{w}_{k+1}$
$= A\bar{z}_{k+j+1|k+1} + B\bar{v}_{k+j+1|k+1}$

For $j = N$, by construction, $\bar{z}_{k+N+2|k+1} = A\bar{z}_{k+N+1|k+1} + B\bar{v}_{k+N+1|k+1}$. Hence, the candidate solution does indeed satisfy the nominal dynamics.
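The cancellation above hinges on the identity $(A+BK)L_j\hat{w} = L_{j+1}\hat{w}$, which follows from the definition $L_{j+1} = (A+BK)L_j$. A small numeric check (ours, with assumed matrices) confirms it:

```python
# Verify (A + BK) L_j w = L_{j+1} w for 2x2 matrices over several steps j.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[1.0, 0.1], [0.0, 1.0]]
B = [[0.0], [0.1]]
K = [[-2.0, -1.5]]                      # assumed stabilizing gain, for illustration
BK = [[B[i][0] * K[0][j] for j in range(2)] for i in range(2)]
AclK = [[A[i][j] + BK[i][j] for j in range(2)] for i in range(2)]  # A + BK

L = [[1.0, 0.0], [0.0, 1.0]]            # L_0 = I
w = [0.3, -0.2]
for _ in range(5):
    L_next = matmul(AclK, L)            # L_{j+1} = (A + BK) L_j
    lhs = matvec(L_next, w)
    rhs = matvec(AclK, matvec(L, w))    # (A + BK) (L_j w)
    assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
    L = L_next
print("telescoping identity holds")
```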
State Constraints: To show feasibility of the candidate solution w.r.t. the state constraints, we need to show that $\bar{z}_{(k+1)+j|k+1} \in Z_{k+1+j|k+1}$ for $j = 0, \ldots, N$. Re-writing Eq. (8) for $P_k(\hat{z}_k)$ for $j = 0, \ldots, N-1$, we have:

$Z_{k+j+1|k} = Z \ominus \bigoplus_{i=0}^{j} L_i\widehat{W}_{k+j+1-i|k} \ominus \tilde{E}_{k+j+1|k}$, so that
$Z_{k+j+1|k} \oplus L_j\widehat{W}_{k+1|k} \subseteq Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+j+1-i|k} \ominus \tilde{E}_{k+j+1|k} \qquad (24)$

Also, let us write the state constraints for all $j = 0, \ldots, N$ for the problem at time $k+1$, i.e. for $P_{k+1}(\hat{z}_{k+1})$:

$Z_{(k+1)+j|k+1} = Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+1+j-i|k+1} \ominus \tilde{E}_{k+1+j|k+1}$

Remember, by construction of the candidate, we have $\bar{z}_{k+j+1|k+1} = z^*_{k+j+1|k} + L_j\hat{w}_{k+1}$. Also, by feasibility of the algorithm at time $k$, we have $z^*_{k+j+1|k} \in Z_{k+j+1|k}$, and by definition, $L_j\hat{w}_{k+1} \in L_j\widehat{W}_{k+1|k}$. Therefore, by Eq. (24), we have for $j = 0, \ldots, N-1$,

$\bar{z}_{(k+1)+j|k+1} \in Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+j+1-i|k} \ominus \tilde{E}_{k+j+1|k} \qquad (25)$

Using points 2) and 3) from Lemma 4,

$Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+j+1-i|k} \ominus \tilde{E}_{k+j+1|k} \subseteq Z \ominus \bigoplus_{i=0}^{j-1} L_i\widehat{W}_{k+1+j-i|k+1} \ominus \tilde{E}_{k+1+j|k+1}$

And using the expression for $Z_{(k+1)+j|k+1}$ above, this implies for all $j = 0, \ldots, N-1$

$\bar{z}_{(k+1)+j|k+1} \in Z_{k+1+j|k+1}$

Now for $j = N$, $\bar{z}_{k+N+1|k+1} = z^*_{k+N+1|k} + L_N\hat{w}_{k+1}$. From the terminal constraint we have $[z^*_{k+N+1|k}, v^*_{k+N|k}]^T \in P_f = C_p \ominus \hat{F}\widehat{W}_{max}$. Since $\hat{w}_{k+1} \in \widehat{W}_{max}$, and by the construction of the candidate solution,

$[\bar{z}_{k+N+1|k+1}, \bar{v}_{k+N|k+1}]^T \in C_p \qquad (26)$

Remember, by definition of the invariant set, $C_p \subseteq P_N(\tilde{E}_{max})$, and since, by definition of $\tilde{E}_{max}$ and Eq. (8), we have $P_N(\tilde{E}_{max}) \subseteq Z_{k+1+N|k+1} \times V_{k+1+N|k+1}$, it follows that $C_p \subseteq Z_{k+1+N|k+1} \times V_{k+1+N|k+1}$. This implies that $\bar{z}_{k+N+1|k+1} \in Z_{k+1+N|k+1}$ and, additionally, $\bar{v}_{k+N|k+1} \in V_{k+1+N|k+1}$. Therefore, the state constraints are met by the candidate solution for $j = 0, \ldots, N$.
Input Constraints: For the inputs, we show that the candidate solution $\bar{v}_{k+j+1|k+1}$, $j = 0, \ldots, N-2$, satisfies the input constraints for $P_{k+1}(\hat{z}_{k+1})$ by an argument similar to that used for the state constraints. Let us re-write the input constraints (9) for $P_k(\hat{z}_k)$ for $j = 0, \ldots, N-2$:

$V_{k+j+1|k} = \underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j} KL_i\widehat{W}_{k+j+1-i|k}$, so that
$V_{k+j+1|k} \oplus KL_j\widehat{W}_{k+1|k} \subseteq \underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+j+1-i|k} \qquad (27)$

Let us also re-write the input constraints for $P_{k+1}(\hat{z}_{k+1})$ for $j = 0, \ldots, N-1$:

$V_{k+1+j|k+1} = \underline{V}_{k+j+1|k+1} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+1+j-i|k+1} \qquad (28)$

By construction of the candidate, we have $\bar{v}_{k+1+j|k+1} = v^*_{k+j+1|k} + KL_j\hat{w}_{k+1}$. Also, by feasibility of the algorithm at time $k$, we have $v^*_{k+j+1|k} \in V_{k+j+1|k}$, and by definition, $KL_j\hat{w}_{k+1} \in KL_j\widehat{W}_{k+1|k}$. Therefore, by the definition of the Pontryagin difference and Eq. (27), we have for $j = 0, \ldots, N-2$,

$\bar{v}_{(k+1)+j|k+1} \in \underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+j+1-i|k}$ (29a)

Using points 3) and 5) from Lemma 4,

$\underline{V}_{k+j+1|k} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+j+1-i|k} \subseteq \underline{V}_{k+j+1|k+1} \ominus \bigoplus_{i=0}^{j-1} KL_i\widehat{W}_{k+1+j-i|k+1}$ (29b)

And using Eq. (28), this implies

$\bar{v}_{(k+1)+j|k+1} \in V_{k+1+j|k+1}$ (29c)

Note, for $j = N-1$, we have already shown in the proof for the state constraints that, by definition of the invariant set $C_p$, $\bar{v}_{k+N|k+1} \in V_{k+1+N|k+1}$, which respects an even tighter constraint. For the last input, $j = N$, we have $\bar{v}_{k+1+N|k+1} = K\bar{z}_{k+N+1|k+1}$; we show that it is inside the (joint) terminal constraint $P_f$, and hence is feasible.
Terminal Constraints: Finally, we need to show that [z̄_{k+N+2|k+1}; v̄_{k+N+1|k+1}]⊤ ∈ P_f. This follows from the construction of the terminal set and of the candidate solution. From Equation (20a), we have:

    z̄_{k+N+2|k+1} = A z̄_{k+N+1|k+1} + B v̄_{k+N+1|k+1}    (30a)
    v̄_{k+N+1|k+1} = K z̄_{k+N+1|k+1}    (30b)

Concatenate these two into p̄_{k+N+2|k+1} = [z̄_{k+N+2|k+1}; v̄_{k+N+1|k+1}]⊤. Also, p̄_{k+N+1|k+1} = [z̄_{k+N+1|k+1}; v̄_{k+N|k+1}]⊤ was shown to be in C_p (Eq. (26)). Therefore, by the definition of the invariant set C_p (Equation (11)), we have p̄_{k+N+2|k+1} + F̂ ŵ_{k+1|k} ∈ C_p for all ŵ_{k+1|k} ∈ Ŵ_max. Therefore p̄_{k+N+2|k+1} ∈ C_p ⊖ F̂ Ŵ_max = P_f, and the terminal constraint is also met.
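The invariance property of C_p used in this step (p + F̂ŵ mapped back into C_p for every ŵ ∈ Ŵ_max) can be certified numerically. A minimal sketch for the special case where C_p and the disturbance set are axis-aligned boxes and the terminal dynamics are p⁺ = A_p p + F ŵ (the matrices below are illustrative, not the paper's):

```python
import numpy as np

def is_robust_invariant_box(Ap, F, half, w_half):
    """Check that the box C = {p : |p_i| <= half[i]} satisfies
    Ap @ p + F @ w in C for every p in C and every w in the
    disturbance box with half-widths w_half (row-wise worst case)."""
    worst = np.abs(Ap) @ half + np.abs(F) @ w_half
    return bool(np.all(worst <= half))

# Illustrative stable closed-loop matrix and disturbance box
Ap = np.array([[0.5, 0.0], [0.1, 0.4]])
F = np.eye(2)
ok = is_robust_invariant_box(Ap, F, np.ones(2), 0.05 * np.ones(2))
```

For general polytopic C_p the same check is a small linear program per facet; toolboxes such as MPT [13] and the invariant set toolbox [14] automate it.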
With this we have the proof of Theorem 1: a feasible solution at time step k for P_k(ẑ_k) implies that the applied input v_k is feasible, that the next state satisfies z_{k+1} ∈ Z, and that the problem P_{k+1}(ẑ_{k+1}) is feasible at time k+1; by induction, P_{k+j}(ẑ_{k+j}) is feasible for every subsequent time step.
C. Proof of Thm. 2
Let T be the diffeomorphism mapping x to z from feedback linearization, and set z_e = T(x_e). Since x_e is an equilibrium point, z_e = 0. Recall that Q and Q_f of (7) are positive semi-definite and that R is positive definite, so that the optimal cost J*(ẑ_k) is a positive definite function of z̄_k, and that the terminal weight in (7) is equivalent to the infinite-horizon cost (by our choice of Q_f). Finally, Thm. 1 guarantees that the tail of the input sequence computed at k is admissible at time k+1. Therefore it is a standard result that the optimal cost J*(ẑ_k) is non-increasing in k and that 0 is a stable equilibrium for the closed-loop linear system (e.g., see [11]). Moreover, the terminal set P_f is a robust invariant set of the z dynamics containing 0 (see Section III-A). Therefore Algorithm 1 stabilizes the nominal state z̄ to P_f from anywhere in Z_0, the true (linearized) state z to an invariant set Z_inv around 0, and the nonlinear state x to the invariant set X_inv = T^{-1}(Z_inv). Therefore Algorithm 1 drives x to X_inv from anywhere in X_0 ⊆ T^{-1}(Z).
D. Transforming between x-space and z-space
Since we control the system in z-space, we need to compute a set Z ⊆ R^{n_z} s.t. z ∈ Z ⇒ x = T^{-1}(z) ∈ X, i.e. Z ⊆ T(X). Thus keeping the state z of the linearized dynamics in Z implies that the nonlinear system's state x remains in X. Moreover, to check feasibility at time 0 of the MPC optimization, and for stability of the nonlinear dynamics, we need a subset X_0 ⊆ X s.t. x ∈ X_0 ⇒ z = T(x) ∈ Z, i.e. X_0 ⊆ T^{-1}(Z). Because T can be an arbitrary diffeomorphism, Z and X_0 have to be computed numerically.
1) Let Z_1 ⊆ R^{n_z} be the rectangle with bounds in the i-th dimension [min_{x∈X} T_i(x), max_{x∈X} T_i(x)], i = 1, ..., n_x. This over-approximates T(X). Next we need to prune it so it under-approximates T(X).
2) Define z_in := arg min{‖z‖ | z ∈ Z_1, T^{-1}(z) ∉ X}. z_in is the smallest-norm inadmissible z in Z_1. Thus all points in the 0-centered ball of radius ‖z_in‖, B_z(0, ‖z_in‖), are admissible, i.e. their pre-images via T^{-1} are in X.
Fig. 5. The error sets Ẽ_max and Ẽ computed over an arbitrary X_{k+j|k}. Also shown are realizations of ẽ := T(x̂) − T(x) for randomly chosen x ∈ X. Color in online version.
3) Let R_z be the largest inscribed rectangle in B_z(0, ‖z_in‖). Now we need to get the x-set that maps to R_z (or a subset of it).
4) Let X_1 ⊆ X be the rectangle with bounds in the i-th dimension [min_{z∈R_z} T^{-1}_i(z), max_{z∈R_z} T^{-1}_i(z)]. Again, this is an over-approximation of T^{-1}(R_z), so it needs to be pruned.
5) Define x_in := arg min{‖x‖ | x ∈ X_1, T(x) ∉ R_z}. Then every point in the 0-centered ball B_x(0, ‖x_in‖) ⊆ X maps via T to R_z.
Therefore we choose Z = R_z and X_0 to be the largest inscribed rectangle in B_x(0, ‖x_in‖).
E. Error sets
For the running example, Fig. 5 shows the sets Ẽ_max and Ẽ_{k+j|k} computed by Eq. (17) and the definition of Ẽ_max, for an arbitrary X_{k+j|k} = [−π/4, 0] × [−0.9666, −0.6283]. It also shows 1000 randomly generated values of T(x̂) − T(x) (for randomly generated e ∈ E and x ∈ X_{k+j|k}); all of them fall inside Ẽ_{k+j|k}.
F. Experiments with the Running example
For the running example of Eq. (6), we discretize the feedback linearized system at 10 Hz and formulate the controller with a horizon of N = 15 steps. The cost function has parameters Q = I and R = 10^{-2}, and Ŵ = [−10^{-2}, 10^{-2}]^2. The state trajectories (and estimates) for the nonlinear and linearized systems are shown in Fig. 6. Note that the states converge to the equilibrium 0. The input u is shown in Fig. 7, and it can be noted that u_k ∈ U for all k.
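For reference, the per-step optimization behind these trajectories is a linearly constrained QP; the paper solves it with CVX [15] and Gurobi [16]. As a self-contained stand-in, the following sketch sets up a nominal MPC problem of the same shape on a double integrator and solves it with SciPy's SLSQP (all matrices, bounds, and weights below are illustrative placeholders, not the paper's tightened sets):

```python
import numpy as np
from scipy.optimize import minimize

# Double-integrator stand-in for the feedback linearized dynamics at 10 Hz.
dt, N = 0.1, 15
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.0, dt])
Q, R = np.eye(2), 1e-2
z0 = np.array([0.5, 0.0])

def rollout(u):
    """Simulate the nominal dynamics under the input sequence u."""
    traj = [z0]
    for uj in u:
        traj.append(A @ traj[-1] + B * uj)
    return np.array(traj)

def cost(u):
    traj = rollout(u)
    return sum(z @ Q @ z for z in traj) + R * float(u @ u)

# Placeholder box constraints standing in for the tightened sets
state_box = {"type": "ineq",
             "fun": lambda u: 1.0 - np.abs(rollout(u)[1:]).ravel()}
res = minimize(cost, np.zeros(N), method="SLSQP",
               bounds=[(-2.0, 2.0)] * N, constraints=[state_box])
v0 = res.x[0]   # first optimal input, the one applied to the plant
```

A dedicated QP solver exploits the problem's sparsity and is preferable at run time; the sketch only illustrates the structure (quadratic cost, linear dynamics, box state and input constraints).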
[1] M. Cannon, “Efficient nonlinear model predictive control algorithms,” Annual Reviews in Control, vol. 28, no. 2, pp. 229–237, 2004.
[2] H. Khalil, Nonlinear Systems. Prentice Hall, 2002.
[3] D. Simon, J. Lofberg, and T. Glad, “Nonlinear model predictive
control using feedback linearization and local inner convex constraint
approximations,” in Control Conference (ECC), 2013 European, July
2013, pp. 2056–2061.
Fig. 6. The states and their estimates of the feedback linearized and non-linear running example. Recall that z1 = x1; therefore, to reduce clutter, we plot the first state only for the feedback linearized system. Color in online version.
Fig. 7. Inputs vand uand their bounds for the running example. Color
in online version.
[4] A. Richards and J. How, “Robust model predictive control with imper-
fect information,” in American Control Conference, 2005. Proceedings
of the 2005, June 2005, pp. 268–273.
[5] D. Mayne and E. Kerrigan, “Tube-based robust nonlinear model predictive control,” in IFAC Symposium on Nonlinear Control Systems.
[6] S. Streif, M. Kogel, T. Bathge, and R. Findeisen, “Robust Nonlinear
Model Predictive Control with Constraint Satisfaction: A Relaxation-
based Approach,” in IFAC World Congress, 2014.
[7] W. Zhao and T. H. Go, “Quadcopter formation flight control combining MPC and robust feedback linearization,” Journal of the Franklin Institute, vol. 351, no. 3, pp. 1335–1355, 2014.
[8] W. Son, J. Choi, and O. Kwon, “Robust control of feedback lineariz-
able system with the parameter uncertainty and input constraint,” in
Proceedings of the 40th SICE Annual Conference, 2001.
[9] Y. V. Pant, H. Abbas, and R. Mangharam, “Tech report: Robust
model predictive control for non-linear systems with input and
state constraints via feedback linearization,” March 2016. [Online].
[10] Y. V. Pant, K. Mohta, H. Abbas, T. X. Nghiem, J. Devietti, and
R. Mangharam, “Co-design of anytime computation and robust con-
trol,” in RTSS, Dec 2015, pp. 43–52.
[11] B. Kouvaritakis and M. Cannon, Model Predictive Control: Classical,
Robust and Stochastic. Springer Verlag, 2015.
[12] T. T. Johnson, S. Bak, M. Caccamo, and L. Sha, “Real-time reachability for verified simplex design,” ACM Trans. Embed. Comput. Syst., vol. 15, no. 2, pp. 26:1–26:27, Feb. 2016. [Online].
[13] M. Herceg, M. Kvasnica, C. Jones, and M. Morari, “Multi-Parametric Toolbox 3.0,” in Proc. of the ECC, Zürich, Switzerland, July 17–19, 2013, pp. 502–510.
[14] E. Kerrigan, “Matlab invariant set toolbox version 0.10.5,” 2016. [Online].
[15] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.0 beta,” Sep. 2013.
[16] Gurobi Optimization, Inc., “Gurobi optimizer reference manual,” 2015. [Online].
[17] M. Seidi, M. Hajiaghamemar, and B. Segee, “Fuzzy Control Systems:
LMI-Based Design,” Fuzzy Controllers- Recent Advances in Theory
and Applications, InTech, 2012.