
Conference Paper

A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints including mixed state-control and state-only constraints. The proposed algorithm formulates linear quadratic optimal control subproblems with a solution that provides a descent direction for a non-differentiable exact penalty function. A set of conditions is given under which the minimization of the merit function produces a sequence of controls with limit points that satisfy the first order necessary conditions of the optimal control problem. The subproblems solved at each step of the algorithm inherit the structure of the nonlinear optimal control problem and can be solved efficiently via Riccati methods.
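The Riccati-structured LQ subproblem mentioned in the abstract can be sketched in a few lines. The recursion below is the standard finite-horizon LQR backward pass, not necessarily the paper's exact subproblem, and the double-integrator data are purely illustrative:

```python
import numpy as np

def lqr_backward_riccati(A, B, Q, R, Qf, N):
    """Solve a finite-horizon LQR subproblem by backward Riccati recursion.

    Returns time-varying feedback gains K[0..N-1] such that u_k = -K[k] @ x_k
    minimizes sum(x'Qx + u'Ru) + x_N' Qf x_N, at cost linear in N.
    """
    P = Qf
    gains = []
    for _ in range(N):
        # Riccati difference equation, iterated backward in time.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()
    return gains

# Hypothetical double-integrator data, for illustration only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); Qf = 10 * np.eye(2)
K = lqr_backward_riccati(A, B, Q, R, Qf, N=50)
```

In an SQP scheme of the kind described above, a recursion like this would be applied to the linearized dynamics at each iteration to produce the descent direction.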



... Recent research proposes the use of Sequential Quadratic Programs (SQPs) employing sequential linearizations of the constraints in the NL-OCP in order to solve it efficiently [27]. However, this approach converges to a local optimum of the NL-OCP defined in (3) if and only if the system is locally N-step controllable [27]. Additionally, the terminal equality constraint is only satisfied as the number of ... (Footnote 1: Although x_r and u_r may be obtained by any approach, an auxiliary optimization problem to obtain these is provided in Appendix A.) ...

... Algorithm 1: Dual-mode model predictive / linear control. 1: OFFLINE. 2: Select µ and compute K_k, P_k using Lemma 1. 3: Compute γ_k using Lemma 2. 4: Select δ and compute ε_k^1 using Proposition 1. Utilize an SQP procedure [27] to produce a feasible virtual control sequence ũ*_k. ...

In this paper, a dual-mode model predictive/linear control method is presented, which extends the concept of dual-mode model predictive control (MPC) to trajectory tracking control of nonlinear dynamic systems described by discrete-time state-space models. The dual-mode controller comprises a time-varying linear control law, implemented when the states lie within a sufficiently small neighborhood of the reference trajectory, and a model predictive control strategy driving the system toward that neighborhood. The boundary of this neighborhood is characterized so as to ensure closed-loop stability while allowing the optimization procedure to terminate in a finite number of iterations. The developed controller is applied to the central air handling unit (AHU) of a two-zone variable air volume (VAV) heating, ventilation, and air conditioning (HVAC) system.
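The dual-mode switching logic described above can be sketched as follows; `K`, `eps`, and `mpc_solve` are illustrative stand-ins for the paper's gain, neighborhood radius, and predictive controller, not its actual quantities:

```python
import numpy as np

def dual_mode_control(x, x_ref, K, eps, mpc_solve):
    """Dual-mode controller sketch: inside a ball of radius eps around the
    reference, apply the linear law; outside, fall back to MPC.

    `mpc_solve` is a placeholder for the predictive controller; `K` and `eps`
    stand in for the gain and neighborhood size (illustrative names only).
    """
    e = x - x_ref
    if np.linalg.norm(e) <= eps:
        return -K @ e          # time-varying linear mode
    return mpc_solve(x)        # predictive mode drives the state toward the ball

# Toy usage: the state is inside the neighborhood, so the linear law fires.
u = dual_mode_control(
    x=np.array([0.05, 0.0]), x_ref=np.zeros(2),
    K=np.array([[1.0, 0.5]]), eps=0.1,
    mpc_solve=lambda x: np.array([0.0]),
)
```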

... In particular, we focus our attention on SQP methods solving optimal control problems. In [8], an SQP method is introduced solving for the optimal input variation. By applying the optimized input trajectory to the system dynamics, new state trajectories are derived. ...

... where ζ^+ is the component of the variation γ^+ on the tangent space T_{ξ_i}T according to (8), and where the definition of O is used for the estimate of the last term. Thus, it follows by using the triangle inequality ...

In this paper, we propose a discrete-time Sequential Quadratic Programming (SQP) algorithm for nonlinear optimal control problems. Using the idea by Hauser of projecting curves onto the trajectory space, the introduced algorithm has guaranteed recursive feasibility of the dynamic constraints. The second essential feature of the algorithm is a specific choice of the Lagrange multiplier update. Due to this ad hoc choice of the multiplier, the algorithm converges locally quadratically. Finally, we show how the proposed algorithm connects standard SQP methods for nonlinear optimal control with the Projection Operator Newton method by Hauser.
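The projection idea attributed to Hauser, tracking an arbitrary curve with a stabilizing feedback so that the result satisfies the dynamics exactly, can be sketched as follows; the names and signatures are assumptions, not the paper's notation:

```python
import numpy as np

def project(alpha, mu, K, f, x0):
    """Project an arbitrary curve (alpha, mu) onto the trajectory manifold:
    simulate the dynamics under a feedback that tracks the curve, so the
    returned pair (x, u) is dynamically feasible by construction.

    f(x, u) is the discrete-time dynamics; K is a list of feedback gains.
    All names here are illustrative.
    """
    N = len(mu)
    x = np.zeros((N + 1, len(x0)))
    u = np.zeros_like(mu)
    x[0] = x0
    for k in range(N):
        u[k] = mu[k] + K[k] @ (alpha[k] - x[k])   # feedback toward the curve
        x[k + 1] = f(x[k], u[k])                  # dynamics hold exactly
    return x, u
```

This is the mechanism behind the guaranteed recursive feasibility claimed in the abstract: whatever curve the QP step produces, its projection is a genuine trajectory of the system.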

... In the numerical solution of differential equations, polynomial or piecewise polynomial functions are often used to represent the approximate solution [1]. Legendre and Chebyshev polynomials are used for solving optimal control problems (see [2], [3], [4] and [5]). Razzaghi and Yousefi [6] defined functions which they called Legendre wavelets for solving constrained optimal control problem. ...

A computational method based on Bézier control points is presented to solve optimal control problems governed by time-varying linear dynamical systems subject to terminal state equality constraints and state inequality constraints. The method approximates each of the system state variables and each of the control variables by a Bézier curve with unknown control points. The approximated problem is then converted into a quadratic programming problem, which can be solved more easily than the original problem. Some examples are given to verify the efficiency and reliability of the proposed method.
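A minimal sketch of the Bernstein-basis evaluation underlying such a method; the function name and data layout are assumptions:

```python
import numpy as np
from math import comb

def bezier(control_points, t):
    """Evaluate a Bezier curve via the Bernstein basis.

    control_points: (n+1, d) array of the unknowns the method optimizes over;
    t: parameter in [0, 1]. Endpoint interpolation (the curve passes through
    the first and last control points) is what lets terminal state equality
    constraints act directly on the last control point.
    """
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    basis = np.array([comb(n, i) * t**i * (1 - t) ** (n - i) for i in range(n + 1)])
    return basis @ P
```

Because the curve is linear in its control points, substituting this representation into a quadratic cost yields a quadratic program in the control points, as the abstract describes.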

A direct trajectory optimization and costate estimation by means of an orthogonal collocation method is discussed. The Gauss pseudospectral method is described for solving optimal control problems numerically. The continuous-time optimal control problem in a direct method is transcribed into a nonlinear programming problem (NLP), which can be solved numerically by well-developed algorithms that attempt to satisfy a set of conditions associated with the NLP. The KKT conditions of the NLP obtained through the Gauss pseudospectral discretization are identical to the variational conditions of the continuous-time optimal control problem discretized through the Gauss pseudospectral method. The KKT multipliers of the NLP can be used to obtain an accurate estimate of the costate at both the Legendre-Gauss points and the boundary points. The results thereby demonstrate the viability of the Gauss pseudospectral method as a means of obtaining accurate solutions to continuous-time optimal control problems.

This overview paper reviews numerical methods for the real-time solution of optimal control problems, as they arise in nonlinear model predictive control (NMPC) as well as in moving horizon estimation (MHE). In the first part, we review numerical optimal control solution methods, focusing exclusively on a discrete-time setting. We discuss several algorithmic "building blocks" that can be combined into a multitude of algorithms. We start by discussing the sequential and simultaneous approaches, the first leading to smaller, the second to more structured optimization problems. The two big families of Newton-type optimization methods, Sequential Quadratic Programming (SQP) and Interior Point (IP) methods, are presented, and we discuss how to exploit the optimal control structure in the solution of the linear-quadratic subproblems, where the two major alternatives are "condensing" and band-structure-exploiting approaches. The second part of the paper discusses how the algorithms can be adapted to the real-time challenge of NMPC and MHE. We recall an important sensitivity result from parametric optimization and show that a tangential solution predictor for online data can easily be generated in Newton-type algorithms. We point out one important difference between SQP and IP methods: while both methods are able to generate the tangential predictor for fixed active sets, the SQP predictor even works across active-set changes. We then classify many proposed real-time optimization approaches from the literature into the developed categories.
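The "condensing" alternative mentioned above, eliminating the states so the QP is posed in the inputs alone, can be sketched for a linear model; `Phi` and `Gamma` are conventional names for the prediction matrices, not notation from the paper:

```python
import numpy as np

def condense(A, B, N):
    """Build prediction matrices of the condensing approach: eliminate the
    states via x_{k+1} = A x_k + B u_k, so the stacked state vector is
    X = Phi @ x0 + Gamma @ U, leaving a small dense QP in U alone.
    """
    n, m = B.shape
    Phi = np.zeros((N * n, n))
    Gamma = np.zeros((N * n, N * m))
    Ak = np.eye(n)
    for k in range(N):
        Ak = A @ Ak                       # A^{k+1}
        Phi[k * n:(k + 1) * n] = Ak
        for j in range(k + 1):
            # x_{k+1} depends on u_j through A^{k-j} B
            Gamma[k * n:(k + 1) * n, j * m:(j + 1) * m] = \
                np.linalg.matrix_power(A, k - j) @ B
    return Phi, Gamma
```

The trade-off the paper discusses is visible here: the condensed QP is small but dense, whereas keeping the states as variables yields a larger, banded problem amenable to Riccati-type solvers.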

In this overview paper, we first survey numerical approaches to solve nonlinear optimal control problems, and second, we present our most recent algorithmic developments for real-time optimization in nonlinear model predictive control. In the survey part, we discuss three direct optimal control approaches in detail: (i) single shooting, (ii) collocation, and (iii) multiple shooting, and we specify why we believe the direct multiple shooting method to be the method of choice for nonlinear optimal control problems in robotics. We couple it with an efficient robot model generator and show the performance of the algorithm on the example of a five-link robot arm. In the real-time optimization part, we outline the idea of nonlinear model predictive control and the real-time challenge it poses to numerical optimization. As one solution approach, we discuss the real-time iteration scheme.

This chapter discusses the method of multipliers for equality constrained problems. By solving an approximate problem, an approximate solution of the original problem can be obtained. However, if a sequence of approximate problems can be constructed that converges in a well-defined sense to the original problem, then the corresponding sequence of approximate solutions would yield in the limit a solution of the original problem. The basic idea in penalty methods is to eliminate some or all of the constraints and add to the objective function a penalty term that prescribes a high cost to infeasible points. Penalty methods are associated with a parameter that determines the severity of the penalty and, as a consequence, the extent to which the resulting unconstrained problem approximates the original constrained problem.
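A minimal sketch of the method of multipliers on a toy equality-constrained problem, assuming a simple gradient-descent inner solver; all problem data and parameter values below are illustrative:

```python
import numpy as np

def method_of_multipliers(grad_f, h, grad_h, x0, rho=10.0,
                          iters=20, inner=200, step=0.01):
    """Method-of-multipliers sketch for min f(x) s.t. h(x) = 0.

    The augmented Lagrangian L(x) = f(x) + lam*h(x) + (rho/2)*h(x)^2 is
    minimized approximately by gradient descent, then the multiplier is
    updated by lam <- lam + rho*h(x).
    """
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(iters):
        for _ in range(inner):   # inner unconstrained minimization
            g = grad_f(x) + (lam + rho * h(x)) * grad_h(x)
            x = x - step * g
        lam = lam + rho * h(x)   # multiplier update
    return x, lam

# Example: min x1^2 + x2^2 s.t. x1 + x2 = 1; the solution is
# x = (0.5, 0.5) with multiplier lam = -1.
x, lam = method_of_multipliers(
    grad_f=lambda x: 2 * x,
    h=lambda x: x[0] + x[1] - 1.0,
    grad_h=lambda x: np.ones(2),
    x0=np.zeros(2),
)
```

Unlike a pure penalty method, the multiplier update lets `rho` stay moderate: the constraint is driven to zero by the multiplier sequence rather than by sending the penalty parameter to infinity.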

Multistep, Newton-type control strategies are developed in order to deal with nonlinear, constrained process control problems. The development parallels the previous development of single step strategies based on operator theory, and this extension includes input and output constraints. In this paper we consider the development of moving time horizon strategies where optimal controller moves are determined from a model linearized about a nominal trajectory, and only the first step is implemented. In a nonlinear analogy to Quadratic Dynamic Matrix Control (QDMC), this approach solves a single quadratic program (QP) over the time horizon and easily incorporates bounds on control inputs and outputs. By invoking concepts from operator theory such as regions of attraction for contraction mappings and descent directions, we also derive sufficient conditions for stability of these methods; these conditions can also be checked on-line. Finally, the effectiveness of this approach is demonstrated on two nonlinear, reactor examples.

We present a Legendre pseudospectral method for directly estimating the costates of the Bolza problem encountered in optimal control theory. The method is based on calculating the state and control variables at the Legendre-Gauss-Lobatto (LGL) points. An Nth-degree Lagrange polynomial approximation of these variables allows a conversion of the optimal control problem into a standard nonlinear programming (NLP) problem with the state and control values at the LGL points as optimization parameters. By applying the Karush-Kuhn-Tucker (KKT) theorem to the NLP problem, we show that the KKT multipliers satisfy a discrete analog of the costate dynamics including the transversality conditions. Indeed, we prove that the costates at the LGL points are equal to the KKT multipliers divided by the LGL weights. Hence, the direct solution by this method also automatically yields the costates by way of the Lagrange multipliers that can be extracted from an NLP solver. One important advantage of this technique is that it allows a very simple way to check the optimality of the direct solution. Numerical examples are included to demonstrate the method.
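The costate mapping stated above can be written compactly; here the notation is assumed, with λ̃_k denoting the KKT multiplier and w_k the LGL quadrature weight at the k-th collocation point:

```latex
\lambda(t_k) = \frac{\tilde{\lambda}_k}{w_k}, \qquad k = 0, 1, \dots, N
```

This relation is what makes the optimality check mentioned in the abstract simple: the multipliers returned by the NLP solver, rescaled by the quadrature weights, should reproduce the costate trajectory predicted by the variational conditions.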

Dynamic optimization problems are usually solved by transforming them into nonlinear programming (NLP) problems with either sequential or simultaneous approaches. However, both approaches can still be inefficient for complex problems. In addition, many problems in chemical engineering have unstable components which lead to unstable intermediate profiles during the solution procedure. If the numerical algorithm chosen utilizes an initial value formulation, the error from decomposition or integration can accumulate and the Newton iterations then fail as a result of an ill-conditioned constraint matrix. On the other hand, by using a suitable decomposition, through multiple shooting or simultaneous collocation, our algorithm has favorable numerical characteristics for both stable and unstable problems; by exploiting the structure of the resulting system, a stable and efficient decomposition algorithm results. In this paper, a new algorithm for solving dynamic optimization problems is developed. This algorithm is based on the nonlinear programming (NLP) formulation coupled with a stable decomposition of the collocation equations. Here, the solution of this NLP formulation is considered through a reduced Hessian successive quadratic programming (SQP) approach. The routine chosen for the decomposition of the system equations is COLDAE, in which the stable collocation scheme is implemented. To address mesh selection, we introduce a new bilevel framework that decouples the element placement from the optimal control procedure. We also provide a proof of the connection between our algorithm and the calculus of variations.

We present a structured interior-point method for the efficient solution of the optimal control problem in model predictive control. The cost of this approach is linear in the horizon length, compared with cubic growth for a naive approach. We use a discrete-time Riccati recursion to solve the linear equations efficiently at each iteration of the interior-point method, and show that this recursion is numerically stable. We demonstrate the effectiveness of the approach by applying it to three process control problems.

We develop a numerically efficient algorithm for computing controls for nonlinear systems that minimize a quadratic performance measure. We formulate the optimal control problem in discrete time, but many continuous-time problems can also be solved after discretization. Our approach is similar to sequential quadratic programming for finite-dimensional optimization problems in that we solve the nonlinear optimal control problem using a sequence of linear quadratic subproblems. Each subproblem is solved efficiently using the Riccati difference equation. We show that each iteration produces a descent direction for the performance measure and that the sequence of controls converges to a solution that satisfies the well-known necessary conditions for the optimal control. We also show that the algorithm is a Gauss-Newton method, which means it inherits excellent convergence properties. We demonstrate the convergence properties of the algorithm with two numerical examples.

Model predictive control requires the solution of a sequence of continuous optimization problems that are nonlinear if a nonlinear model is used for the plant. We describe briefly a trust-region feasibility-perturbed sequential quadratic programming algorithm (developed in a companion report), then discuss its adaptation to the problems arising in nonlinear model predictive control. Computational experience with several representative sample problems is described, demonstrating the effectiveness of the proposed approach.

One of the most effective numerical techniques for solving nonlinear programming problems is the sequential quadratic programming approach. Many large nonlinear programming problems arise naturally in data fitting and when discretization techniques are applied to systems described by ordinary or partial differential equations. Problems of this type are characterized by matrices which are large and sparse. This paper describes a nonlinear programming algorithm which exploits the matrix sparsity produced by these applications. Numerical experience is reported for a collection of trajectory optimization problems with nonlinear equality and inequality constraints.

The solution of discretized optimization problems is a major task in many application areas of engineering and science. These optimization problems present various challenges, which result from the high number of variables involved as well as from the properties of the underlying process to be optimized. They also exhibit several structures which have to be exploited by efficient numerical solution approaches. In this paper we focus on partially reduced SQP methods, which are shown to be particularly well suited for this problem class. In practical applications, the efficiency of this approach is demonstrated for optimization problems resulting from discretized DAEs as well as from discretized PDEs. The practically important issues of inexact solution of linearized subproblems and of working-range validation are tackled as well.

For linear processes, QDMC has proven to be an effective way of systematically handling both input and output process constraints. This note describes an analogous extension of QDMC for nonlinear, constrained processes. Based on Newton-type methods implemented in a moving horizon framework, a multi-step algorithm is derived that solves a single quadratic program (QP) over each time horizon. Sufficient stability properties can be invoked through the concept of descent directions, and these conditions can also be checked on-line. Finally, the method is illustrated on a small reactor example with state and input time delays.

A Riccati-based approach is proposed to solve linear quadratic optimal control problems subject to linear equality path constraints, including mixed state-control and state-only constraints. The proposed algorithm requires computations that scale linearly with the horizon length. It can be used as the key subproblem to build effective iterative methodologies that tackle general inequality-constrained and nonlinear optimal control problems.

An active-set method is proposed for solving linear quadratic optimal control problems subject to general linear inequality path constraints, including mixed state-control and state-only constraints. The proposed algorithm uses a Riccati-based approach to efficiently solve the equality-constrained optimal control subproblems generated during the procedure, and it is illustrated with two numerical examples.

Contents: Introduction to nonlinear programming; large, sparse nonlinear programming; optimal control preliminaries; optimal control problems; optimal control examples.
