
Article

Sensitivity-Based Economic NMPC with a Path-Following Approach

Eka Suwartadi 1, Vyacheslav Kungurtsev 2 and Johannes Jäschke 1,*

1 Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), 7491 Trondheim, Norway; eka.suwartadi@ntnu.no

2 Department of Computer Science, Czech Technical University in Prague, 12000 Praha 2, Czech Republic; vyacheslav.kungurtsev@fel.cvut.cz

* Correspondence: jaschke@ntnu.no; Tel.: +47-735-93691

Academic Editor: Dominique Bonvin

Received: 26 November 2016; Accepted: 13 February 2017; Published: 27 February 2017

Abstract:

We present a sensitivity-based predictor-corrector path-following algorithm for fast

nonlinear model predictive control (NMPC) and demonstrate it on a large case study with an economic

cost function. The path-following method is applied within the advanced-step NMPC framework to

obtain fast and accurate approximate solutions of the NMPC problem. In our approach, we solve

a sequence of quadratic programs to trace the optimal NMPC solution along a parameter change.

A distinguishing feature of the path-following algorithm in this paper is that the strongly-active

inequality constraints are included as equality constraints in the quadratic programs, while the

weakly-active constraints are left as inequalities. This leads to close tracking of the optimal

solution. The approach is applied to an economic NMPC case study consisting of a process with

a reactor, a distillation column and a recycler. We compare the path-following NMPC solution with

an ideal NMPC solution, which is obtained by solving the full nonlinear programming problem.

Our simulations show that the proposed algorithm effectively traces the exact solution.

Keywords:

fast economic NMPC; NLP sensitivity; path-following algorithm; nonlinear programming;

dynamic optimization

1. Introduction

The idea of economic model predictive control (MPC) is to integrate the economic optimization

layer and the control layer in the process control hierarchy into a single dynamic optimization layer.

While classic model predictive control approaches typically employ a quadratic objective to minimize

the error between the setpoints and selected measurements, economic MPC adjusts the inputs to

minimize the economic cost of operation directly. This makes it possible to optimize the cost during

transient operation of the plant. In recent years, this has become increasingly desirable, as stronger

competition, volatile energy prices and rapidly changing product speciﬁcations require agile plant

operations, where also transients are optimized to maximize proﬁt.

The first industrial implementations of economic MPC were reported in [1,2] for oil refinery applications. The development of theory and stability analysis for economic MPC arose almost a decade afterwards; see, e.g., [3,4]. Recent progress on economic MPC is reviewed and surveyed in [5,6]. Most of the current research activities focus on the stability analysis of economic MPC and do not discuss its performance (an exception is [7]).

Because nonlinear process models are often used for economic optimization, a potential drawback

of economic MPC is that it requires solving a large-scale nonlinear optimization problem (NLP)

associated with the nonlinear model predictive control (NMPC) problem at every sample time.

Processes 2017,5, 8; doi:10.3390/pr5010008 www.mdpi.com/journal/processes


The solution of this NLP may take a significant amount of time [8], and this can lead to performance degradation and even to instability of the closed-loop system [9].

To reduce the detrimental effect of computational delay in NMPC, several sensitivity-based methods were proposed [10–12]. All of these fast sensitivity approaches exploit the fact that the NMPC optimization problems are identical at each sample time, except for one varying parameter: the initial state. Instead of solving the full nonlinear optimization problem when new measurements of the state become available, these approaches use the sensitivity of the NLP solution at a previously-computed iteration to obtain fast approximate solutions to the new NMPC problem. These approximate solutions can be computed and implemented in the plant with minimal delay. A recent overview of the developments in fast sensitivity-based nonlinear MPC is given in [13], and a comparison of different approaches to obtain sensitivity updates for NMPC is compiled in the paper by Wolf and Marquardt [14].

Diehl et al. [15] proposed the concept of real-time iteration (RTI), in which the full NLP is not solved at all during the MPC iterations. Instead, at each NMPC sampling time, a single quadratic program (QP) related to the sequential quadratic programming (SQP) iteration for solving the full NLP is solved. The real-time iteration scheme contains two phases: (1) the preparation phase and (2) the feedback phase. In the preparation phase, the model derivatives are evaluated using a predicted state measurement, and a QP is formulated based on data of this predicted state. In the feedback phase, once the new initial state is available, the QP is updated to include the new initial state and solved for the control input that is injected into the plant. The real-time iteration scheme has been applied to economic NMPC in the context of wind turbine control [16,17]. Similar to the real-time iteration scheme are the approaches by Ohtsuka [18] and the early paper by Li and Biegler [19], where one single Newton-like iteration is performed per sampling time.

A different approach, the advanced-step NMPC (asNMPC), was proposed by Zavala and Biegler [10]. The asNMPC approach involves solving the full NLP at every sample time. However, the full NLP solution is computed in advance for a predicted initial state. Once the new state measurement is available, the NLP solution is corrected using a fast sensitivity update to match the measured or estimated initial state. A simple sensitivity update scheme is implemented in the software package sIPOPT [20]. However, active set changes are handled rather heuristically; see [21] for an overview. Kadam and Marquardt [22] proposed a similar approach, where nominal NLP solutions are updated by solving QPs in a neighboring extremal scheme; see also [12,23].

The framework of asNMPC was also applied by Jäschke and Biegler [24], who use a multiple-step predictor path-following algorithm to correct the NLP predictions. Their approach included measures to handle active set changes rigorously, and their path-following advanced-step NMPC algorithm is also the first one to handle non-unique Lagrange multipliers.

The contribution of this paper is to apply an improved path-following method for correcting the

NLP solution within the advanced-step NMPC framework. In particular, we replace the predictor

path-following method from [24] by a predictor-corrector method and demonstrate numerically that

the method works efﬁciently on a large-scale case study. We present how the asNMPC with the

predictor-corrector path-following algorithm performs in the presence of measurement noise and

compare it with a pure predictor path-following asNMPC approach and an ideal NMPC approach,

where the NLP is assumed to be solved instantly. We also give a brief discussion about how our

method differs from previously published approaches.

The structure of this paper is the following. We start by introducing the ideal NMPC and

advanced-step NMPC frameworks in Section 2 and give a description of our path-following algorithm

together with some relevant background material and a brief discussion in Section 3. The proposed

algorithm is applied to a process with a reactor, distillation and recycling in Section 4, where we

consider the cases with and without measurement noise and discuss the results. The paper is closed

with our conclusions in Section 5.


2. NMPC Problem Formulations

2.1. The NMPC Problem

We consider a nonlinear discrete-time dynamic system:

x_{k+1} = f(x_k, u_k)    (1)

where x_k ∈ R^{n_x} denotes the state variable, u_k ∈ R^{n_u} is the control input and f: R^{n_x} × R^{n_u} → R^{n_x} is a continuous model function, which calculates the next state x_{k+1} from the previous state x_k and the control input u_k, where k ∈ N. This system is optimized by a nonlinear model predictive controller, which solves the problem:

(P_nmpc):   min_{z_l, v_l}  Ψ(z_N) + Σ_{l=0}^{N−1} ψ(z_l, v_l)    (2)
            s.t.  z_{l+1} = f(z_l, v_l),   l = 0, …, N−1,
                  z_0 = x_k,
                  (z_l, v_l) ∈ Z,   l = 0, …, N−1,
                  z_N ∈ X_f,

at each sample time. Here, z_l ∈ R^{n_x} is the predicted state variable; v_l ∈ R^{n_u} is the predicted control input; and z_N ∈ X_f is the final predicted state variable, restricted to the terminal region X_f ⊂ R^{n_x}. The stage cost is denoted by ψ: R^{n_x} × R^{n_u} → R and the terminal cost by Ψ: X_f → R. Further, Z denotes the path constraints, i.e., Z = {(z, v) | q(z, v) ≤ 0}, where q: R^{n_x} × R^{n_u} → R^{n_q}.

The solution of the optimization problem P_nmpc is denoted by z*_0, …, z*_N, v*_0, …, v*_{N−1}. At sample time k, an estimate or measurement of the state x_k is obtained, and problem P_nmpc is solved. Then, the first part of the optimal control sequence is assigned as the plant input, such that u_k = v*_0. This first part of the solution to P_nmpc defines an implicit feedback law u_k = κ(x_k), and the system will evolve according to x_{k+1} = f(x_k, κ(x_k)). At the next sample time k+1, when the measurement of the new state x_{k+1} is obtained, the procedure is repeated. The NMPC algorithm is summarized in Algorithm 1.

Algorithm 1: General NMPC algorithm.

1   set k ← 0
2   while MPC is running do
3       1. Measure or estimate x_k.
4       2. Assign the initial state: set z_0 = x_k.
5       3. Solve the optimization problem P_nmpc to find v*_0.
6       4. Assign the plant input u_k = v*_0.
7       5. Inject u_k into the plant (1).
8       6. Set k ← k + 1.
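Algorithm 1 can be sketched in a few lines of Python. This is a toy illustration under our own simplifying assumptions, not the paper's implementation: the OCP solve in step 3 is replaced by a brute-force search over a small input grid for a scalar linear model with a quadratic stage cost, and the plant coincides with the model.

```python
from itertools import product

CANDIDATES = (-1.0, -0.5, 0.0, 0.5, 1.0)  # coarse input grid (toy choice)

def solve_ocp(x0, N=5):
    """Stand-in for solving P_nmpc: brute-force search over input sequences
    for the scalar toy model z+ = 0.9*z + v with stage cost z^2 + 0.1*v^2
    and terminal cost z_N^2. Returns the best input sequence v_0..v_{N-1}."""
    best_cost, best_seq = float("inf"), None
    for seq in product(CANDIDATES, repeat=N):
        z, cost = x0, 0.0
        for v in seq:
            cost += z**2 + 0.1 * v**2   # stage cost psi(z_l, v_l)
            z = 0.9 * z + v             # model z_{l+1} = f(z_l, v_l)
        cost += z**2                    # terminal cost Psi(z_N)
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

def nmpc_loop(x0, steps=4):
    """Algorithm 1: measure the state, solve the OCP, inject the first input."""
    x, injected = x0, []
    for _ in range(steps):
        v_star = solve_ocp(x)     # steps 1-3: set z_0 = x_k and solve P_nmpc
        u = v_star[0]             # step 4: u_k = v*_0
        injected.append(u)
        x = 0.9 * x + u           # step 5: plant evolves (no plant-model mismatch)
    return x, injected
```

An economic NMPC variant would only change the stage-cost line; the receding-horizon structure stays the same.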

2.2. Ideal NMPC and Advanced-Step NMPC Framework

For achieving optimal economic performance and good stability properties, problem P_nmpc needs to be solved instantly, so that the optimal input can be injected without time delay as soon as the values of the new states are available. We refer to this hypothetical case without computational delay as ideal NMPC.

In practice, there will always be some time delay between obtaining the updated values of the states and injecting the updated inputs into the plant. The main reason for this delay is the time it takes to solve the optimization problem P_nmpc. As the process models become more advanced, solving the optimization problems requires more time, and the computational delay can no longer be neglected. This has led to the development of fast sensitivity-based NMPC approaches. One such approach, which will be adopted in this paper, is the advanced-step NMPC (asNMPC) approach [10]. It is based on the following steps:

1. Solve the NMPC problem at time k with a predicted state value for time k+1.
2. When the measurement x_{k+1} becomes available at time k+1, compute an approximation of the NLP solution using fast sensitivity methods.
3. Update k ← k+1, and repeat from Step 1.
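The two-stage pattern above can be written out on a deliberately tiny example. This is our own toy, not the paper's case study: a scalar linear model with a one-step quadratic objective, so the "NLP" u*(x0) = −0.4·x0 and its sensitivity du*/dx0 = −0.4 have closed forms.

```python
def solve_ocp(x0):
    """Toy 'NLP': min_u (0.8*x0 + u)**2 + u**2, solved in closed form.
    Setting the derivative 2*(0.8*x0 + u) + 2*u to zero gives u*(x0) = -0.4*x0."""
    return -0.4 * x0

DU_DX0 = -0.4  # NLP sensitivity du*/dx0 (constant for this linear-quadratic toy)

def advanced_step_cycle(x_k, u_k, disturbance):
    """One asNMPC cycle: solve in advance at the predicted state (step 1),
    then correct the precomputed solution when the measurement arrives (step 2)."""
    x_pred = 0.8 * x_k + u_k                # model prediction of x_{k+1}
    u_background = solve_ocp(x_pred)        # expensive solve, done in advance
    x_meas = 0.8 * x_k + u_k + disturbance  # actual measurement at time k+1
    u_injected = u_background + DU_DX0 * (x_meas - x_pred)  # fast sensitivity update
    return x_meas, u_injected
```

Because the toy problem is linear-quadratic, the one-step sensitivity update reproduces the ideal-NMPC input exactly; in the nonlinear case it is only a first-order approximation, which is what motivates the path-following corrections of Section 3.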

Zavala and Biegler proposed a fast one-step sensitivity update that is based on solving a linear system of equations [10]. Under some assumptions, this corresponds to a first-order Taylor approximation of the optimal solution. In particular, this approach requires strict complementarity of the NLP solution, which ensures no changes in the active set. A more general approach involves allowing for changes in the active set and making several sensitivity updates. This was proposed in [24] and will be developed further in this paper.

3. Sensitivity-Based Path-Following NMPC

In this section, we present some fundamental sensitivity results from the literature and then use

them in a path-following scheme for obtaining fast approximate solutions to the NLP.

3.1. Sensitivity Properties of NLP

The dynamic optimization Problem (2) can be cast as a general parametric NLP problem:

(P_NLP):   min_χ  F(χ, p)    (3)
           s.t.  c(χ, p) = 0,
                 g(χ, p) ≤ 0,

where χ ∈ R^{n_χ} are the decision variables (which generally include the state variables and the control inputs, n_χ = n_x + n_u) and p ∈ R^{n_p} is the parameter, which is typically the initial state variable x_k. In addition, F: R^{n_χ} × R^{n_p} → R is the scalar objective function; c: R^{n_χ} × R^{n_p} → R^{n_c} denotes the equality constraints; and finally, g: R^{n_χ} × R^{n_p} → R^{n_g} denotes the inequality constraints. The instances of Problem (3) that are solved at each sample time differ only in the parameter p.

The Lagrangian function of this problem is defined as:

L(χ, p, λ, µ) = F(χ, p) + λ^T c(χ, p) + µ^T g(χ, p),    (4)

and the KKT (Karush–Kuhn–Tucker) conditions are:

c(χ, p) = 0,  g(χ, p) ≤ 0,    (primal feasibility)    (5)
µ ≥ 0,    (dual feasibility)
∇_χ L(χ, p, λ, µ) = 0,    (stationarity)
µ^T g(χ, p) = 0,    (complementary slackness).

In order for the KKT conditions to be a necessary condition of optimality, we require a constraint

qualiﬁcation (CQ) to hold. In this paper, we will assume that the linear independence constraint

qualiﬁcation (LICQ) holds:

Definition 1 (LICQ). Given a vector p and a point χ, the LICQ holds at χ if the set of vectors {∇_χ c_i(χ, p)}_{i ∈ {1,…,n_c}} ∪ {∇_χ g_i(χ, p)}_{i: g_i(χ,p)=0} is linearly independent.


The LICQ implies that the multipliers (λ, µ) satisfying the KKT conditions are unique. If, additionally, a suitable second-order condition holds, then the KKT conditions guarantee a unique local minimum. A suitable second-order condition states that the Hessian matrix has to be positive definite in a set of appropriate directions, defined in the following property:

Definition 2 (SSOSC). The strong second-order sufficient condition (SSOSC) holds at χ with multipliers λ and µ if d^T ∇²_{χχ}L(χ, p, λ, µ) d > 0 for all d ≠ 0 such that ∇_χ c(χ, p)^T d = 0 and ∇_χ g_i(χ, p)^T d = 0 for all i such that g_i(χ, p) = 0 and µ_i > 0.

For a given p, denote the solution to (3) by χ*(p), λ*(p), µ*(p); if no confusion is possible, we omit the argument and write simply χ*, λ*, µ*. We are interested in knowing how the solution changes with a perturbation in the parameter p. Before we state a first sensitivity result, we define another important concept:

Definition 3 (SC). Given a vector p and a solution χ* with multiplier vectors λ* and µ*, strict complementarity (SC) holds if µ*_i − g_i(χ*, p) > 0 for each i = 1, …, n_g.

Now, we are ready to state the result below given by Fiacco [25].

Theorem 1 (Implicit function theorem applied to optimality conditions). Let χ*(p) be a KKT point that satisfies (5), and assume that LICQ, SSOSC and SC hold at χ*. Further, let the functions F, c, g be at least k+1-times differentiable in χ and k-times differentiable in p. Then:

• χ* is an isolated minimizer, and the associated multipliers λ and µ are unique.
• for p in a neighborhood of p0, the set of active constraints remains unchanged.
• for p in a neighborhood of p0, there exists a k-times differentiable function σ(p) = [χ*(p)^T  λ*(p)^T  µ*(p)^T]^T that corresponds to a locally unique minimum of (3).

Proof. See Fiacco [25].

Using this result, the sensitivity of the optimal solution χ*, λ*, µ* in a small neighborhood of p0 can be computed by solving a system of linear equations that arises from applying the implicit function theorem to the KKT conditions of (3):

[ ∇²_{χχ}L(χ*, p0, λ*, µ*)   ∇_χ c(χ*, p0)   ∇_χ g_A(χ*, p0) ] [ ∇_p χ ]      [ ∇²_{pχ}L(χ*, p0, λ*, µ*) ]
[ ∇_χ c(χ*, p0)^T             0               0               ] [ ∇_p λ ]  = − [ ∇_p c(χ*, p0)            ]    (6)
[ ∇_χ g_A(χ*, p0)^T           0               0               ] [ ∇_p µ ]      [ ∇_p g_A(χ*, p0)          ]

Here, the constraint gradients with subscript A indicate that we only include the vectors and components of the Jacobian corresponding to the active inequality constraints at χ*, i.e., i ∈ A if g_i(χ*, p0) = 0. Denoting the solution of the equation above as [∇_p χ  ∇_p λ  ∇_p µ]^T, for small ∆p we obtain a good estimate:

χ(p0 + ∆p) = χ* + ∇_p χ ∆p,    (7)
λ(p0 + ∆p) = λ* + ∇_p λ ∆p,    (8)
µ(p0 + ∆p) = µ* + ∇_p µ ∆p,    (9)

of the solution to the NLP Problem (3) at the parameter value p0 + ∆p. This approach was applied by Zavala and Biegler [10].

If ∆p becomes large, the approximate solution may no longer be accurate enough, because the SC assumption implies that the active set cannot change. While that is usually true for small perturbations, large changes in ∆p may very well induce active set changes.
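To make the update (6)–(9) concrete, here is a hand-sized example of our own (not from the paper): the parametric NLP min ½(χ1² + χ2²) s.t. χ1 + χ2 = p has the KKT point χ*(p) = (p/2, p/2) with λ*(p) = −p/2. The sketch assembles the sensitivity system (6), solves it with a small Gaussian-elimination routine, and applies update (7); because the toy problem is linear-quadratic, the prediction is exact.

```python
def solve_linear(A, b):
    """Dense Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

p0, dp = 1.0, 0.2
chi_star = [p0 / 2, p0 / 2]   # analytic primal solution at p0

# KKT sensitivity system (6): blocks [Hessian, constraint gradient; gradient^T, 0]
K = [[1.0, 0.0, 1.0],         # Hessian of L is the identity, grad c = (1, 1)
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 0.0]]
rhs = [0.0, 0.0, 1.0]         # -[d2L/dpdchi; dc/dp] = -[0; 0; -1]
sens = solve_linear(K, rhs)   # [dchi1/dp, dchi2/dp, dlambda/dp]

chi_pred = [c + s * dp for c, s in zip(chi_star, sens[:2])]  # update (7)
```

The computed sensitivities are (0.5, 0.5) for the primal variables and −0.5 for the multiplier, matching the analytic solution path.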


It can be seen that the sensitivity system corresponds to the stationarity conditions of a particular QP. This is not coincidental. It can be shown that for ∆p small enough, the set {i: µ_i(p̄) > 0} is constant for p̄ = p0 + ∆p. Thus, we can form a QP wherein we are potentially moving off of weakly-active constraints while staying on the strongly-active ones. The primal-dual solution of this QP is in fact the directional derivative of the primal-dual solution path χ*(p), λ*(p), µ*(p).

Theorem 2. Let F, c, g be twice continuously differentiable in p and χ near (χ*, p0), and let the LICQ and SSOSC hold at (χ*, p0). Then, the solution (χ*(p), λ*(p), µ*(p)) is Lipschitz continuous in a neighborhood of (χ*, λ*, µ*, p0), and the solution function (χ*(p), λ*(p), µ*(p)) is directionally differentiable. Moreover, the directional derivative uniquely solves the following quadratic problem:

min_{∆χ}  ½ ∆χ^T ∇²_{χχ}L(χ*, p0, λ*, µ*) ∆χ + ∆χ^T ∇²_{pχ}L(χ*, p0, λ*, µ*) ∆p    (10)
s.t.  ∇_χ c_i(χ*, p0)^T ∆χ + ∇_p c_i(χ*, p0)^T ∆p = 0,   i = 1, …, n_c,
      ∇_χ g_j(χ*, p0)^T ∆χ + ∇_p g_j(χ*, p0)^T ∆p = 0,   j ∈ K_+,
      ∇_χ g_j(χ*, p0)^T ∆χ + ∇_p g_j(χ*, p0)^T ∆p ≤ 0,   j ∈ K_0,

where K_+ = {j : µ_j > 0} is the strongly-active set and K_0 = {j : µ_j = 0 and g_j(χ*, p0) = 0} denotes the weakly-active set.

Proof. See [26] (Sections 5.1 and 5.2) and [27] (Proposition 3.4.1).

The theorem above gives the solution of the perturbed NLP (3) by solving a QP problem. Note that regardless of the inertia of the Lagrangian Hessian, if the SSOSC holds, it is positive definite on the null-space of the equality constraints, and thus the QP defined above is convex with an easily obtainable finite global minimizer. In [28], it is noted that as the solution to this QP is the directional derivative of the primal-dual solution of the NLP, it is a predictor step, a tangential first-order estimate of the change in the solution subject to a change in the parameter. We refer to the QP (10) as a pure-predictor. Note that obtaining the sensitivity via (10) instead of (6) has the advantage that changes in the active set can be accounted for correctly, and strict complementarity (SC) is not required. On the other hand, when SC does hold, (6) and (10) are equivalent.

3.2. Path-Following Based on Sensitivity Properties

Equation (6) and the QP (10) describe the change in the optimal solution for small perturbations. They cannot be guaranteed to reproduce the optimal solution accurately for larger perturbations, because of curvature in the solution path and active set changes that happen further away from the linearization point. One approach to handle such cases is to divide the overall perturbation into several smaller intervals and to iteratively use the sensitivity to track the path of optimal solutions.

The general idea of a path-following method is to reach the solution of the problem at a final parameter value p_f by tracing a sequence of solutions (χ_k, λ_k, µ_k) for a series of parameter values p(t_k) = (1 − t_k) p0 + t_k p_f with 0 = t_0 < t_1 < … < t_k < … < t_N = 1. The new direction is found by evaluating the sensitivity at the current point. This is similar to Euler integration of ordinary differential equations.

However, just as in the case of integrating differential equations with a Euler method,

a path-following algorithm that is only based on the sensitivity calculated by the pure predictor

QP may fail to track the solution accurately enough and may lead to poor solutions. To address this

problem, a common approach is to include elements that are similar to a Newton step, which force the

path-following algorithm towards the true solution. It has been found that such a corrector element can

be easily included in a QP that is very similar to the predictor QP (10). Consider approximating (3)


by a QP, linearizing with respect to both χ and p, but again enforcing the equality of the strongly-active constraints, as we expect them to remain strongly active at the perturbed NLP:

min_{∆χ,∆p}  ½ ∆χ^T ∇²_{χχ}L(χ*, p0, λ*, µ*) ∆χ + ∆χ^T ∇²_{pχ}L(χ*, p0, λ*, µ*) ∆p + ∇_χ F^T ∆χ + ∇_p F^T ∆p + ½ ∆p^T ∇²_{pp}L(χ*, p0, λ*, µ*) ∆p    (11)
s.t.  c_i(χ*, p0) + ∇_χ c_i(χ*, p0)^T ∆χ + ∇_p c_i(χ*, p0)^T ∆p = 0,   i = 1, …, n_c,
      g_j(χ*, p0) + ∇_χ g_j(χ*, p0)^T ∆χ + ∇_p g_j(χ*, p0)^T ∆p = 0,   j ∈ K_+,
      g_j(χ*, p0) + ∇_χ g_j(χ*, p0)^T ∆χ + ∇_p g_j(χ*, p0)^T ∆p ≤ 0,   j ∈ {1, …, n_g} \ K_+.

In our NMPC problem P_nmpc, the parameter p corresponds to the current "initial" state x_k. Moreover, the cost function is independent of p, so that ∇_p F = 0. Since the parameter enters the constraints linearly, ∇_p c and ∇_p g are constant. With these facts, the above QP simplifies to:

min_{∆χ}  ½ ∆χ^T ∇²_{χχ}L(χ*, p0 + ∆p, λ*, µ*) ∆χ + ∇_χ F^T ∆χ    (12)
s.t.  c_i(χ*, p0 + ∆p) + ∇_χ c_i(χ*, p0 + ∆p)^T ∆χ = 0,   i = 1, …, n_c,
      g_j(χ*, p0 + ∆p) + ∇_χ g_j(χ*, p0 + ∆p)^T ∆χ = 0,   j ∈ K_+,
      g_j(χ*, p0 + ∆p) + ∇_χ g_j(χ*, p0 + ∆p)^T ∆χ ≤ 0,   j ∈ {1, …, n_g} \ K_+.

We denote the QP formulation (12) as the predictor-corrector. We note that this QP is similar to the QP proposed in the real-time iteration scheme [15]. However, it is not quite the same, as we enforce the strongly-active constraints as equality constraints in the QP. As explained in [28], this particular QP estimates, in its predictor component, how the NLP solution changes as the parameter does, and refines that estimate, as a corrector, by more closely satisfying the KKT conditions at the new parameter.
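The effect of the corrector term is easy to see on an unconstrained scalar toy problem of our own (not the paper's): track the stationary point of an objective whose gradient is g(x, p) = x³ + x − p, so the solution path x*(p) is curved. The predictor is an Euler step along dx/dp = 1/(3x² + 1); the corrector adds one Newton step on the stationarity condition at the new parameter.

```python
def newton_root(x, p, tol=1e-12):
    """Fully converged solution of x^3 + x = p (reference 'NLP solve')."""
    for _ in range(100):
        g = x**3 + x - p
        if abs(g) < tol:
            break
        x -= g / (3 * x**2 + 1)
    return x

def track(p0, pf, n_steps, corrector):
    """Trace the stationary-point path of g(x, p) = x^3 + x - p = 0."""
    x = newton_root(0.0, p0)
    dp = (pf - p0) / n_steps
    p = p0
    for _ in range(n_steps):
        x += dp / (3 * x**2 + 1)                 # predictor: tangential Euler step
        p += dp
        if corrector:                            # corrector: one Newton step at new p
            x -= (x**3 + x - p) / (3 * x**2 + 1)
    return x

x_true = newton_root(1.0, 2.0)   # exact solution at p = 2 is x = 1
err_pred = abs(track(0.0, 2.0, 5, corrector=False) - x_true)
err_pc   = abs(track(0.0, 2.0, 5, corrector=True)  - x_true)
```

With five coarse steps, the pure predictor drifts off the curved path, while the predictor-corrector stays close to it, mirroring the behavior of QPs (10) and (12).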

The predictor-corrector QP (12) is well suited for use in a path-following algorithm, where the optimal solution path is tracked from p0 to a final value p_f along a sequence of parameter points p(t_k) = (1 − t_k) p0 + t_k p_f with 0 = t_0 < t_1 < … < t_k < … < t_N = 1. At each point p(t_k), the QP is solved and the primal-dual solution is updated as:

χ(t_{k+1}) = χ(t_k) + ∆χ,    (13)
λ(t_{k+1}) = ∆λ,    (14)
µ(t_{k+1}) = ∆µ,    (15)

where ∆χ is obtained from the primal solution of QP (12) and where ∆λ and ∆µ correspond to the Lagrange multipliers of QP (12).

Changes in the active set along the path are detected by the QP as follows: If a constraint becomes inactive at some point along the path, the corresponding multiplier µ_j will first become weakly active, i.e., it will be added to the set K_0. Since it is then not included as an equality constraint, the next QP solution can move away from the constraint. Similarly, if a new constraint g_j becomes active along the path, it will make the corresponding linearized inequality constraint in the QP active and will be tracked further along the path.

The resulting path-following algorithm is summarized with its main steps in Algorithm 2, and we

are now in the position to apply it in the advanced-step NMPC setting described in Section 2.2.

In particular, the path-following algorithm is used to ﬁnd a fast approximation of the optimal NLP

solution corresponding to the new available state measurement, which is done by following the

optimal solution path from the predicted state to the measured state.


Algorithm 2: Path-following algorithm.

Input: initial variables from the NLP: χ*(p0), λ*(p0), µ*(p0);
    fix stepsize ∆t, and set N = 1/∆t;
    set initial parameter value p0;
    set final parameter value p_f;
    set t = 0;
    set constant 0 < α1 < 1.
Output: primal variable χ and dual variables λ, µ along the path

1   for k ← 1 to N do
2       Compute step ∆p = p_k − p_{k−1}
3       Solve QP problem    /* to obtain ∆χ, ∆λ, ∆µ */
4       if QP is feasible then
5           /* perform update */
6           χ ← χ + ∆χ    /* update primal variables */
7           Update dual variables using Equations (8) and (9) for the pure-predictor method, or (14) and (15) for the predictor-corrector method
8           t ← t + ∆t    /* advance along the path */
9           k ← k + 1
10      else
11          /* QP is infeasible; reduce the QP stepsize */
12          ∆t ← α1 ∆t
13          t ← t − α1 ∆t
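A Python skeleton of Algorithm 2, with the QP solve abstracted into a hypothetical callback `qp_step` of our own design: it returns the updated primal point, or None when the QP is "infeasible", which triggers the stepsize reduction. The bookkeeping is slightly simplified relative to Algorithm 2 (on failure we only shrink ∆t and retry), and the toy callback used for illustration returns the exact parametric solution x*(p) = p/2 of a made-up problem while refusing jumps that are too large.

```python
def path_follow(p0, pf, qp_step, dt0=0.5, alpha=0.5, max_iter=100):
    """Skeleton of Algorithm 2: advance t from 0 to 1 along
    p(t) = (1 - t)*p0 + t*pf, re-solving a QP at each point and
    shrinking the stepsize when the QP reports infeasibility."""
    t, dt = 0.0, dt0
    x = qp_step(p0, None)          # solution at p0 (full NLP solve, assumed given)
    path = [(t, x)]
    for _ in range(max_iter):
        if t >= 1.0:
            break
        t_new = min(t + dt, 1.0)
        p_new = (1 - t_new) * p0 + t_new * pf
        step = qp_step(p_new, x)
        if step is None:           # QP infeasible: reduce the stepsize and retry
            dt *= alpha
            continue
        x, t = step, t_new
        path.append((t, x))
    return x, path

def toy_qp(p_new, x_current):
    """Hypothetical QP oracle: exact parametric solution x*(p) = p/2,
    'infeasible' (None) when asked to jump too far in one step."""
    if x_current is not None and abs(p_new / 2 - x_current) > 0.2:
        return None
    return p_new / 2

x_final, path = path_follow(0.0, 2.0, toy_qp)
```

Starting with ∆t = 0.5, the oracle rejects the first two stepsizes, the algorithm settles on ∆t = 0.125 and then tracks the path to p_f = 2 exactly.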

3.3. Discussion of the Path-Following asNMPC Approach

In this section, we discuss some characteristics of the path-following asNMPC approach presented

in this paper. We also present a small example to demonstrate the effect of including the strongly-active

constraints as equality constraints in the QP.

A reader who is familiar with the real-time iteration scheme [15] will have realized that the QPs (12) that are solved in our path-following algorithm are similar to the ones proposed and solved in the real-time iteration scheme. However, there are some fundamental differences between the standard real-time iteration scheme as described in [15] and the asNMPC with a path-following approach.

This work is set in the advanced-step NMPC framework, i.e., at every time step, the full NLP is

solved for a predicted state. When the new measurement becomes available, the precomputed NLP

solution is updated by tracking the optimal solution curve from the predicted initial state to the new

measured or estimated state. Any numerical homotopy algorithm can be used to update the NLP

solution, and we have presented a suitable one in this paper. Note that the solution of the last QP

along the path corresponds to the updated NLP solution, and only the inputs computed in this last QP

will be injected into the plant.

The situation is quite different in the real-time iteration (RTI) scheme described in [15]. Here,

the NLP is not solved at all during the MPC sampling times. Instead, at each sampling time, a single

QP is solved, and the computed input is applied to the plant. This will require very fast sampling

times, and if the QP fails to track the true solution due to very large disturbances, similar measures

as in the advanced-step NMPC procedure (i.e., solving the full NLP) must be performed to get the

controller “on track” again. Note that the inputs computed from every QP are applied to the plant and,

not as in our path-following asNMPC, only the input computed in the last QP along the homotopy.

Finally, in the QPs of the previously published real-time iteration schemes [15], all inequality

constraints are linearized and included as QP inequality constraints. Our approach in this paper,

however, distinguishes between strongly- and weakly-active inequality constraints. Strongly-active


inequalities are included as linearized equality constraints in the QP, while weakly-active constraints

are linearized and added as inequality constraints to the QP. This ensures that the true solution path is

tracked more accurately also when the full Hessian of the optimization problem becomes non-convex.

We illustrate this in the small example below.

Example 1. Consider the following parametric "NLP":

min_x  x1² − x2²    (16)
s.t.  −2 − x2 + t ≤ 0,
      −2 + x1² + x2 ≤ 0,

for which we have plotted the constraints at t = 0 in Figure 1a.

Figure 1. (a) Constraints of NLP (16) in Example 1 and (b) their linearization at x̂ = (1, −2) and t = 0; panel (b) shows the linearized constraint 3 − 2∆x1 − ∆x2 ≥ 0.

The feasible region lies between the parabola and the horizontal line. Changing the parameter t from zero to one moves the lower constraint up from x2 = −2 to x2 = −1.

The objective gradient is ∇F(x) = (2x1, −2x2), and the Hessian of the objective is always indefinite:

H = [ 2   0 ]
    [ 0  −2 ].

The constraint Jacobian is

∇g(x) = [  0   −1 ]
        [ 2x1   1 ],

with rows ∇g1(x)^T and ∇g2(x)^T. For t ∈ [0, 1], a (local) primal solution is given by x*(t) = (0, t − 2). The first constraint is active, the second constraint is inactive, and the dual solution is λ*(t) = (−2x2, 0). At t = 0, we thus have the optimal primal solution x* = (0, −2) and the optimal multiplier λ* = (4, 0).

We consider starting from an approximate solution at the point x̂(t=0) = (1, −2) with dual variables λ̂(t=0) = (4, 0), such that the first constraint is strongly active, while the second one remains inactive. The linearized constraints for this point are shown in Figure 1b. Now, consider a change ∆t = 1, going from t = 0 to t = 1.

The pure-predictor QP (10) has the following form, recalling that we enforce the strongly-active constraint as an equality:

min_{∆x}  ∆x1² − ∆x2²    (17)
s.t.  −∆x2 + 1 = 0.

This QP is convex with a unique solution ∆x = (0, 1), resulting in the subsequent point x̂(t=1) = (1, −1).


The predictor-corrector QP (12), which includes a linear term in the objective that acts as a corrector, is given for this case as:

min_{∆x}  ∆x1² − ∆x2² + 2∆x1 + 4∆x2    (18)
s.t.  −∆x2 + 1 = 0,
      −3 + 2∆x1 + ∆x2 ≤ 0.

Again, this QP is convex, with a unique primal solution ∆x = (−1, 1). The step computed by this predictor-corrector QP moves the update to the true optimal solution x̂(t=1) = (0, −1) = x*(t=1).

Now, consider a third QP, which is the predictor-corrector QP (12), but without enforcing the strongly-active constraints as equalities. That is, all constraints are included in the QP as they were in the original NLP (16):

min_{∆x}  ∆x1² − ∆x2² + 2∆x1 + 4∆x2    (19)
s.t.  −∆x2 + 1 ≤ 0,
      −3 + 2∆x1 + ∆x2 ≤ 0.

This QP is non-convex and unbounded; we can decrease the objective arbitrarily by setting ∆x = (1.5 − 0.5r, r) and letting the scalar r ≥ 1 go to infinity. Although there is a local minimizer at ∆x = (−1, 1), a QP solver that behaves "optimally" should find the unbounded "solution".

This last approach cannot be expected to work reliably if the full Hessian of the optimization problem may become non-convex, which can easily be the case when optimizing economic objective functions. We note, however, that if the Hessian ∇²_{χχ}L is positive definite, it is not necessary to enforce the strongly-active constraints as equality constraints in the predictor-corrector QP (12).
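The arithmetic of Example 1 can be checked mechanically. In the sketch below, the QP solutions are the closed-form minimizers read off above (the equality constraint fixes ∆x2 = 1, and the remaining one-dimensional objectives in ∆x1 are minimized by 0 and −1, respectively); the script then verifies the resulting points against the constraints of the NLP (16).

```python
def nlp_constraints(x, t):
    """Constraints of the parametric NLP (16), written as g(x, t) <= 0."""
    g1 = -2 - x[1] + t          # horizontal-line constraint, shifted by t
    g2 = -2 + x[0]**2 + x[1]    # parabola constraint
    return g1, g2

x_hat = (1.0, -2.0)   # approximate solution at t = 0

# Pure-predictor QP (17): the equality -dx2 + 1 = 0 fixes dx2 = 1;
# the remaining objective dx1^2 - 1 is minimized by dx1 = 0.
dx_predictor = (0.0, 1.0)
x_predictor = (x_hat[0] + dx_predictor[0], x_hat[1] + dx_predictor[1])

# Predictor-corrector QP (18): dx2 = 1 again, and the corrector terms
# 2*dx1 + 4*dx2 shift the dx1-minimizer of dx1^2 + 2*dx1 to dx1 = -1,
# which also satisfies the inequality -3 + 2*dx1 + dx2 <= 0.
dx_corrector = (-1.0, 1.0)
x_corrector = (x_hat[0] + dx_corrector[0], x_hat[1] + dx_corrector[1])

x_true = (0.0, -1.0)  # exact solution x*(t = 1)
```

The predictor lands at (1, −1), while the predictor-corrector step recovers the exact perturbed solution (0, −1), with the first constraint active and the second strictly feasible at t = 1.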

4. Numerical Case Study

4.1. Process Description

We demonstrate the path-following NMPC (pf-NMPC) on an isothermal reactor and separator process depicted in Figure 2. The continuously-stirred tank reactor (CSTR) is fed with a stream F0 containing 100% component A and a recycle R from the distillation column. A first-order reaction A → B takes place in the CSTR, where B is the desired product, and the product with flow rate F is fed to the column. In the distillation column, the unreacted raw material is separated from the product and recycled into the reactor. The desired product B leaves the distillation column as the bottom product, which is required to have a certain purity. Reaction kinetic parameters for the reactor are described in Table 1. The distillation column model is taken from [29]. Table 2 summarizes the parameters used in the distillation. In total, the model has 84 state variables, of which 82 are from the distillation (concentration and holdup for each stage) and two from the CSTR (one concentration and one holdup).


Figure 2. Diagram of continuously-stirred tank reactor (CSTR) and distillation column.

Processes 2017,5, 8 11 of 18

Table 1. Reaction kinetics parameters.

Reaction    Reaction Rate Constant (min−1)    Activation Energy (J/mol)
A → B       1 × 10^8                          6 × 10^4

Table 2. Distillation Column A parameters.

Parameter Value

αAB 1.5

number of stages 41

feed stage location 21

The stage cost of the economic objective function to optimize under operation is:

J = pF F0 + pV VB − pB B, (20)

where pF is the feed cost, pV is the steam cost and pB is the product price. The prices are set to pF = 1 $/kmol, pV = 0.02 $/kmol and pB = 2 $/kmol. The operational constraints are on the concentration of the bottom product (xB ≤ 0.1), as well as on the liquid holdups at the bottom and the top of the distillation column and in the CSTR (0.3 ≤ M{B,D,CSTR} ≤ 0.7 kmol). The control inputs are the reflux flow (LT), the boil-up flow (VB), the feed rate to the distillation column (F) and the distillate (top) and bottom product flow rates (D and B). These control inputs have the following bound constraints:

0.1 ≤ LT ≤ 10
0.1 ≤ VB ≤ 4.008
0.1 ≤ F ≤ 10
0.1 ≤ D ≤ 1.0
0.1 ≤ B ≤ 1.0

(all flows in kmol/min).

First, we run a steady-state optimization with the feed rate F0 = 0.3 kmol/min. This gives us the optimal values for the control inputs and state variables. The optimal steady-state input values are us = [1.18 1.92 1.03 0.74 0.29]^T. The optimal states and control inputs are used to construct a regularization term that is added to the objective function (20). The regularized stage cost becomes:

Jm = pF F0 + pV VB − pB B − pD D + (z − xs)^T Q1 (z − xs) + (v − us)^T Q2 (v − us). (21)

The weights Q1 and Q2 are selected to make the rotated stage cost of the steady-state problem strongly convex; for details, see [24]. This is done to obtain an economic NMPC controller that is stable.

Second, we set up the NLP for calculating the predicted state variables z and predicted control inputs v. We employ a direct collocation approach on finite elements using Lagrange collocation to discretize the dynamics, with three collocation points in each finite element. Through the direct collocation approach, the state variables and control inputs become optimization variables. The economic NMPC case study is initialized with the steady-state values for a production rate F0 = 0.29 kmol/min, such that the economic NMPC controller is effectively controlling a throughput change from F0 = 0.29 kmol/min to F0 = 0.3 kmol/min. We simulate 150 MPC iterations with a sample time of 1 min. The prediction horizon of the NMPC controller is set to 30 min. This setting results in an NLP with 10,314 optimization variables. We use CasADi [30] (Version 3.1.0-rc1) with IPOPT [31] as the NLP solver. For the QPs, we use MINOS QP [32] from TOMLAB.
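The Lagrange collocation used above can be illustrated with a small self-contained sketch. This is not the CasADi implementation used in the paper; we assume Radau IIA points for the three collocation points and build the Lagrange differentiation matrix that appears in the collocation equations:

```python
import math

# Illustration of Lagrange collocation with three (assumed Radau IIA) points
# on one finite element. Plain-Python sketch, not the paper's CasADi code.

def radau3_points():
    """Interpolation nodes: tau_0 = 0 plus the three Radau IIA collocation points."""
    s = math.sqrt(6.0)
    return [0.0, (4.0 - s) / 10.0, (4.0 + s) / 10.0, 1.0]

def lagrange_diff_matrix(tau):
    """D[i][j] = dL_i/dtau at tau_j, for the Lagrange basis L_i on nodes tau."""
    n = len(tau)
    D = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                # L_i'(tau_i) = sum_{k != i} 1 / (tau_i - tau_k)
                D[i][j] = sum(1.0 / (tau[i] - tau[k]) for k in range(n) if k != i)
            else:
                # L_i'(tau_j) = [prod_{k != i,j} (tau_j - tau_k)] / [prod_{k != i} (tau_i - tau_k)]
                num = 1.0
                for k in range(n):
                    if k != i and k != j:
                        num *= (tau[j] - tau[k]) / (tau[i] - tau[k])
                D[i][j] = num / (tau[i] - tau[j])
    return D
```

In the collocation equations, the state derivative at collocation point τj within an element of length h is approximated by (1/h) Σi D[i][j] xi and equated to the model right-hand side f(xj, uj), which is what turns the states at the collocation points into NLP variables.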


4.2. Comparison of the Open-Loop Optimization Results

In this section, we compare the solutions obtained from the path-following algorithm with the “true” solution of the optimization problem Pnmpc obtained by solving the full NLP. To do this, we consider the second MPC iteration, where the path-following asNMPC is used for the first time to correct the one-sample-ahead prediction (in the first MPC iteration, to start up the asNMPC procedure, the full NLP is solved twice). We focus on the interesting case where the predicted state is corrupted by noise, such that the path-following algorithm is required to update the solution. In Figure 3, we have plotted the difference between a selection of predicted states, obtained by applying the path-following NMPC approaches, and the ideal NMPC approach.


Figure 3. The difference in predicted state variables between ideal NMPC (iNMPC) and path-following NMPC (pf-NMPC) from the second iteration.

We observe that the one-step pure-predictor tracks the ideal NMPC solution worst, while the four-step path-following with predictor-corrector tracks it best. This is because the predictor-corrector path-following QP has an additional linear term in the objective function and constraints for the purpose of moving closer to the solution of the NLP (the “corrector” component), as well as tracing the first-order estimate of the change in the solution (the “predictor”). The four-step path-following performs better because a smaller step size gives a finer approximation of the parametric NLP solution.

This is also reflected in the average approximation errors given in Table 3. The average approximation error has been calculated by averaging the error one-norm ‖χpath-following − χideal NMPC‖1 over all MPC iterations.
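The metric reported in Table 3 can be sketched as a direct transcription of this averaged one-norm (variable names are ours):

```python
# Sketch of the Table 3 metric: the one-norm difference between the
# path-following and ideal-NMPC solution vectors, averaged over all
# MPC iterations. Inputs are lists of per-iteration solution vectors.

def average_one_norm_error(chi_pf_iters, chi_ideal_iters):
    errors = [sum(abs(a - b) for a, b in zip(chi_pf, chi_ideal))
              for chi_pf, chi_ideal in zip(chi_pf_iters, chi_ideal_iters)]
    return sum(errors) / len(errors)
```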

We observe that in this case study, the accuracy of a single predictor-corrector step is almost as good as that of four predictor-corrector steps along the path. That is, a single predictor-corrector QP update may be sufficient for this application. However, in general, in the presence of larger noise magnitudes and longer sampling intervals, which cause poorer predictions, a single-step update may no longer lead to good approximations. We also note that the error of the pure-predictor path-following method is several orders of magnitude larger than that of the predictor-corrector variants. On the other hand, given that the optimization vector χ has dimension 10,164 in our case study, an average one-norm approximation error of ca. 4.5 still corresponds to very small errors in the individual variables.

Table 3. Approximation error using path-following (PF) algorithms. asNMPC, advanced-step NMPC; QP, quadratic program.

Average Approximation Error between Ideal NMPC and PF asNMPC

PF with predictor QP, 1 step              4.516
PF with predictor QP, 4 steps             4.517
PF with predictor-corrector QP, 1 step    1.333 × 10^−2
PF with predictor-corrector QP, 4 steps   1.282 × 10^−2

4.3. Closed-Loop Results: No Measurement Noise

In this section, we compare the results for closed-loop process operation. We first consider the case without measurement noise and compare the results for ideal NMPC with those obtained by the path-following algorithm with the pure-predictor QP (10) and the predictor-corrector QP (12). Figure 4 shows the trajectories of the top and bottom compositions in the distillation column and of the reactor concentration and holdup. Note that around 120 min, the bottom composition constraint in the distillation column becomes active, while the CSTR holdup is kept at its upper bound at all times (any reduction in the holdup would result in an economic and product loss).


Figure 4. Recycle composition, bottom composition, reactor concentration and reactor holdup.

In this case (without noise), the prediction and the true solution differ only due to numerical noise. There is no need to update the prediction, and all approaches give exactly the same closed-loop behavior. This is also reflected in the accumulated stage cost, which is shown in Table 4.


Table 4. Comparison of economic NMPC controllers (no noise). Accumulated stage cost in $.

Economic NMPC Controller Accumulated Stage Cost

iNMPC −296.42

pure-predictor QP:

pf-NMPC one step −296.42

pf-NMPC four steps −296.42

predictor-corrector QP:

pf-NMPC one step −296.42

pf-NMPC four steps −296.42

The closed-loop control inputs are given in Figure 5. Note here that the feed rate into the distillation column is adjusted such that the reactor holdup is at its constraint at all times.


Figure 5. Optimized control inputs.

4.4. Closed-Loop Results: With Measurement Noise

Next, we run simulations with measurement noise on all of the holdups in the system. The noise is taken to have a normal distribution with zero mean and a variance of one percent of the steady-state values. This results in corrupted predictions that have to be corrected for by the path-following algorithms. Again, we perform simulations with one and four steps of the pure-predictor and predictor-corrector QPs.

Figure 6 shows the top and bottom compositions of the distillation column, together with the concentration and holdup in the CSTR. The states are obtained under closed-loop operation with the ideal and path-following NMPC algorithms. Due to the noise, it is not possible to avoid violations of the active constraints on the holdup of the CSTR and the bottom composition in the distillation column. This is the case for both the ideal NMPC and the path-following approaches.
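The noise model described above can be sketched as follows; we assume the noise is added independently to each holdup measurement, with variance equal to one percent of the corresponding steady-state value:

```python
import math
import random

# Sketch of the measurement-noise model: zero-mean Gaussian noise whose
# variance is one percent of the steady-state value, added to each holdup
# measurement (independence across holdups is our assumption).

def noisy_holdup(m_meas, m_steady, rng):
    sigma = math.sqrt(0.01 * m_steady)   # variance = 0.01 * steady-state value
    return m_meas + rng.gauss(0.0, sigma)

rng = random.Random(0)                   # seeded for reproducibility
samples = [noisy_holdup(0.7, 0.7, rng) - 0.7 for _ in range(20000)]
```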


Figure 6. Recycle composition, bottom composition, reactor concentration and reactor holdup.

The input variables shown in Figure 7 also reflect the measurement noise, and again, we see that the fast sensitivity NMPC approaches are very close to the ideal NMPC inputs.


Figure 7. Optimized control inputs.

Finally, we compare the accumulated economic stage cost in Table 5.


Table 5. Comparison of economic NMPC controllers (with noise). Accumulated stage cost in $.

Economic NMPC Controller Accumulated Stage Cost

iNMPC −296.82

pure-predictor QP:

pf-NMPC one step −297.54

pf-NMPC four steps −297.62

predictor-corrector QP:

pf-NMPC one step −296.82

pf-NMPC four steps −296.82

Here, we observe that our proposed predictor-corrector path-following algorithm performs identically to the ideal NMPC. This is as expected, since the predictor-corrector path-following algorithm aims to reproduce the true NLP solution. Interestingly, in this case, the larger error of the pure-predictor path-following NMPC leads to a better economic performance of the closed-loop system. This behavior is due to the fact that the random measurement noise can have a positive or a negative effect on the operation, which is not taken into account by the ideal NMPC (nor by the predictor-corrector NMPC). In this case, the inaccuracy of the pure-predictor path-following NMPC led to better economic performance in the closed loop; however, it could also have been the opposite.

5. Discussion and Conclusions

We applied the path-following ideas developed in Jäschke et al. [24] and Kungurtsev and Diehl [28] to a large-scale process containing a reactor, a distillation column and a recycle stream. Compared with single-step updates based on solving a linear system of equations, as proposed in [10], our path-following approach requires somewhat more computational effort. However, the advantage of the path-following approach is that active-set changes are handled rigorously. Moreover, solving a sequence of a few QPs can be expected to be much faster than solving the full NLP, especially since the QPs can be initialized very well, such that the computational delay between obtaining the new state and injecting the updated input into the plant remains sufficiently small. In our computations, we have considered a fixed step size for the path-following, such that the number of QPs to be solved is known in advance.
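The fixed step-size loop can be sketched as follows. The QP solve at each step is replaced here by the scalar predictor-corrector update for the toy condition F(x, p) = x³ − p = 0 (our stand-in, not the paper's QP); the point of the sketch is the structure: the parameter is moved from p0 to p1 in N equal steps, so the number of subproblems is fixed in advance.

```python
# Sketch of the fixed step-size path-following loop: N equal parameter steps,
# one predictor-corrector "QP" per step. The scalar update for
# F(x, p) = x^3 - p = 0 (exact solution x*(p) = p^(1/3)) is a toy stand-in.

def path_following(x, p0, p1, n_steps):
    p, dp = p0, (p1 - p0) / n_steps
    for _ in range(n_steps):
        F, Fx, Fp = x ** 3 - p, 3.0 * x ** 2, -1.0
        x = x - (F + Fp * dp) / Fx    # predictor-corrector update at this step
        p += dp
    return x

x_true = 2.0 ** (1.0 / 3.0)
err1 = abs(path_following(1.0, 1.0, 2.0, 1) - x_true)   # one large step
err4 = abs(path_following(1.0, 1.0, 2.0, 4) - x_true)   # four smaller steps
```

As in the simulations above, more (smaller) steps trace the parametric solution more accurately, at the cost of more subproblem solves per sample time.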

The case without noise does not require the path-following algorithm to correct the solution, because the prediction and the true measurement are identical, except for numerical noise. However, when measurement noise is added to the holdups, the situation becomes different. In this case, the prediction and the measurements differ, such that an update is required. All four approaches track the ideal NMPC solution to some degree; however, in terms of accuracy, the predictor-corrector performs consistently better. Given that the pure sensitivity QP and the predictor-corrector QP are very similar in structure, it is recommended to use the latter in the path-following algorithm, especially for highly nonlinear processes and cases with significant measurement noise.

We have presented basic algorithms for path-following, and they work well for the cases we have studied, in the sense that the path-following algorithms do not diverge from the true solution. In principle, however, the path-following algorithms may get lost, and more sophisticated implementations need to include checks and safeguards. We note, however, that applying the path-following algorithm in the advanced-step NMPC framework has the desirable property that the solution of the full NLP acts as a corrector: if the path-following algorithm diverges from the true solution, this will most likely persist for only one sample time, until the next full NLP is solved.

The path-following algorithm in this paper (and the corresponding QPs) still relies on the assumption of linearly independent constraint gradients. If path constraints are present in the discretized NLP, care must be taken to formulate them in such a way that the linear independence constraint qualification (LICQ) is not violated. In future work, we will consider extending the path-following NMPC approaches to handle more general situations with linearly dependent inequality constraints.

Acknowledgments: Vyacheslav Kungurtsev was supported by the Czech Science Foundation Project 17-26999S. Eka Suwartadi and Johannes Jäschke are supported by the Research Council of Norway Young Research Talent Grant 239809.

Author Contributions: V.K. and J.J. contributed the algorithmic ideas for the paper; E.S. implemented the algorithm and simulated the case study; E.S. primarily wrote the paper, with periodic assistance from V.K.; J.J. supervised the work, analyzed the simulation results and contributed to writing and correcting the paper.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Zanin, A.C.; Tvrzská de Gouvêa, M.; Odloak, D. Industrial implementation of a real-time optimization strategy for maximizing production of LPG in a FCC unit. Comput. Chem. Eng. 2000, 24, 525–531.
2. Zanin, A.C.; Tvrzská de Gouvêa, M.; Odloak, D. Integrating real-time optimization into the model predictive controller of the FCC system. Control Eng. Pract. 2002, 10, 819–831.
3. Rawlings, J.B.; Amrit, R. Optimizing process economic performance using model predictive control. In Nonlinear Model Predictive Control; Springer: Berlin/Heidelberg, Germany, 2009; Volume 384, pp. 119–138.
4. Rawlings, J.B.; Angeli, D.; Bates, C.N. Fundamentals of economic model predictive control. In Proceedings of the 51st IEEE Conference on Decision and Control (CDC), Maui, HI, USA, 10–13 December 2012.
5. Ellis, M.; Durand, H.; Christofides, P.D. A tutorial review of economic model predictive control methods. J. Process Control 2014, 24, 1156–1178.
6. Tran, T.; Ling, K.-V.; Maciejowski, J.M. Economic model predictive control—A review. In Proceedings of the 31st ISARC, Sydney, Australia, 9–11 July 2014.
7. Angeli, D.; Amrit, R.; Rawlings, J.B. On average performance and stability of economic model predictive control. IEEE Trans. Autom. Control 2012, 57, 1615–1626.
8. Idris, E.A.N.; Engell, S. Economics-based NMPC strategies for the operation and control of a continuous catalytic distillation process. J. Process Control 2012, 22, 1832–1843.
9. Findeisen, R.; Allgöwer, F. Computational delay in nonlinear model predictive control. In Proceedings of the International Symposium on Advanced Control of Chemical Processes (ADCHEM'03), Hong Kong, China, 11–14 January 2004.
10. Zavala, V.M.; Biegler, L.T. The advanced-step NMPC controller: Optimality, stability, and robustness. Automatica 2009, 45, 86–93.
11. Diehl, M.; Bock, H.G.; Schlöder, J.P. A real-time iteration scheme for nonlinear optimization in optimal feedback control. SIAM J. Control Optim. 2005, 43, 1714–1736.
12. Würth, L.; Hannemann, R.; Marquardt, W. Neighboring-extremal updates for nonlinear model-predictive control and dynamic real-time optimization. J. Process Control 2009, 19, 1277–1288.
13. Biegler, L.T.; Yang, X.; Fischer, G.A.G. Advances in sensitivity-based nonlinear model predictive control and dynamic real-time optimization. J. Process Control 2015, 30, 104–116.
14. Wolf, I.J.; Marquardt, W. Fast NMPC schemes for regulatory and economic NMPC—A review. J. Process Control 2016, 44, 162–183.
15. Diehl, M.; Bock, H.G.; Schlöder, J.P.; Findeisen, R.; Nagy, Z.; Allgöwer, F. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Process Control 2002, 12, 577–585.
16. Gros, S.; Quirynen, R.; Diehl, M. An improved real-time economic NMPC scheme for wind turbine control using spline-interpolated aerodynamic coefficients. In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, USA, 15–17 December 2014; pp. 935–940.
17. Gros, S.; Vukov, M.; Diehl, M. A real-time MHE and NMPC scheme for wind turbine control. In Proceedings of the 52nd IEEE Conference on Decision and Control, Firenze, Italy, 10–13 December 2013; pp. 1007–1012.
18. Ohtsuka, T. A continuation/GMRES method for fast computation of nonlinear receding horizon control. Automatica 2004, 40, 563–574.
19. Li, W.C.; Biegler, L.T. Multistep, Newton-type control strategies for constrained nonlinear processes. Chem. Eng. Res. Des. 1989, 67, 562–577.
20. Pirnay, H.; López-Negrete, R.; Biegler, L.T. Optimal sensitivity based on IPOPT. Math. Program. Comput. 2012, 4, 307–331.
21. Yang, X.; Biegler, L.T. Advanced-multi-step nonlinear model predictive control. J. Process Control 2013, 23, 1116–1128.
22. Kadam, J.; Marquardt, W. Sensitivity-based solution updates in closed-loop dynamic optimization. In Proceedings of the DYCOPS 7 Conference, Cambridge, MA, USA, 5–7 July 2004.
23. Würth, L.; Hannemann, R.; Marquardt, W. A two-layer architecture for economically optimal process control and operation. J. Process Control 2011, 21, 311–321.
24. Jäschke, J.; Yang, X.; Biegler, L.T. Fast economic model predictive control based on NLP-sensitivities. J. Process Control 2014, 24, 1260–1272.
25. Fiacco, A.V. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming; Academic Press: New York, NY, USA, 1983.
26. Bonnans, J.F.; Shapiro, A. Optimization problems with perturbations: A guided tour. SIAM Rev. 1998, 40, 228–264.
27. Levy, A.B. Solution sensitivity from general principles. SIAM J. Control Optim. 2001, 40, 1–38.
28. Kungurtsev, V.; Diehl, M. Sequential quadratic programming methods for parametric nonlinear optimization. Comput. Optim. Appl. 2014, 59, 475–509.
29. Skogestad, S.; Postlethwaite, I. Multivariable Feedback Control: Analysis and Design; Wiley-Interscience: Hoboken, NJ, USA, 2005.
30. Andersson, J. A General Purpose Software Framework for Dynamic Optimization. Ph.D. Thesis, Arenberg Doctoral School, KU Leuven, Leuven, Belgium, October 2013.
31. Wächter, A.; Biegler, L.T. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 2006, 106, 25–57.
32. Murtagh, B.A.; Saunders, M.A. A projected Lagrangian algorithm and its implementation for sparse nonlinear constraints. Math. Program. Study 1982, 16, 84–117.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).